Commit

Merge branch 'main' of https://github.com/risingwavelabs/risingwave into li0k/storage_split
Li0k committed Sep 4, 2024
2 parents 6246a14 + 0dd06ff commit 32bafa7
Showing 193 changed files with 3,630 additions and 2,230 deletions.
3 changes: 3 additions & 0 deletions Cargo.lock


2 changes: 1 addition & 1 deletion Makefile.toml
@@ -1244,7 +1244,7 @@ cat <<EOF > "${INSTALL_PATH}"
#!/usr/bin/env bash
set -e
cd "$DIR"
./risedev "\\$@"
RISEDEV_CMD="\\$(basename \\"$0")" ./risedev "\\$@"
EOF
chmod +x "${INSTALL_PATH}"
18 changes: 9 additions & 9 deletions README.md
@@ -33,7 +33,7 @@
<img alt="Slack" src="https://badgen.net/badge/Slack/Join%20RisingWave/0abd59?icon=slack" />
</a>
<a
href="https://twitter.com/risingwavelabs"
href="https://x.com/risingwavelabs"
target="_blank"
>
<img alt="X" src="https://img.shields.io/twitter/follow/risingwavelabs" />
@@ -46,9 +46,9 @@
</a>
</div>
RisingWave is a Postgres-compatible SQL engine engineered to provide the <i><b>simplest</b></i> and <i><b>most cost-efficient</b></i> approach for <b>processing</b>, <b>analyzing</b>, and <b>managing</b> real-time event streaming data.
RisingWave is a Postgres-compatible SQL database engineered to provide the <i><b>simplest</b></i> and <i><b>most cost-efficient</b></i> approach for <b>processing</b>, <b>analyzing</b>, and <b>managing</b> real-time event streaming data.

RisingWave can <b>ingest</b> millions of events per second, seamlessly <b>join and analyze</b> live data streams with historical tables, <b>serve</b> ad-hoc queries in real-time, and <b>deliver</b> fresh, consistent results.
RisingWave can <b>ingest</b> millions of events per second, continuously <b>join and analyze</b> live data streams with historical tables, <b>serve</b> ad-hoc queries in real-time, and <b>deliver</b> fresh, consistent results wherever needed.

![RisingWave](./docs/dev/src/images/architecture_20240814.png)

@@ -62,13 +62,13 @@ curl https://risingwave.com/sh | sh
To learn about other installation options, such as using a Docker image, see [Quick Start](https://docs.risingwave.com/docs/current/get-started/).

## When is RisingWave the perfect fit?
RisingWave is the ideal solution for building event-driven applications. Choose RisingWave when you want to:
RisingWave is the ideal solution for:

* Ingest data from real-time sources like Kafka streams, database CDC, and more.
* Perform complex queries (such as joins, aggregations, and time windowing) on the fly.
* Interactively and concurrently explore consistent, up-to-the-moment results.
* Seamlessly send results to downstream systems.
* Process streaming and batch data using the same codebase.
* Managing real-time data sources like Kafka streams, database CDC, and more.
* Executing complex, on-the-fly queries, including joins, aggregations, and time windowing.
* Interactively and concurrently exploring consistent, up-to-the-moment results.
* Seamlessly delivering results to downstream systems.
* Processing both streaming and batch data with a unified codebase.


## In what use cases does RisingWave excel?
38 changes: 7 additions & 31 deletions dashboard/lib/api/metric.ts
@@ -15,7 +15,10 @@
*
*/
import { Metrics, MetricsSample } from "../../components/metrics"
import { GetBackPressureResponse } from "../../proto/gen/monitor_service"
import {
BackPressureInfo,
GetBackPressureResponse,
} from "../../proto/gen/monitor_service"
import api from "./api"

export interface BackPressuresMetrics {
@@ -30,13 +33,6 @@ export async function fetchPrometheusBackPressure() {
return res
}

export interface BackPressureInfo {
actorId: number
fragmentId: number
downstreamFragmentId: number
value: number
}

export interface BackPressureRateInfo {
actorId: number
fragmentId: number
@@ -55,14 +51,8 @@ function convertToMapAndAgg(
const map = new Map<string, number>()
for (const item of backPressures) {
const key = `${item.fragmentId}-${item.downstreamFragmentId}`
if (mapValue.has(key) && mapNumber.has(key)) {
// add || tp avoid NaN and pass check
mapValue.set(key, (mapValue.get(key) || 0) + item.value)
mapNumber.set(key, (mapNumber.get(key) || 0) + 1)
} else {
mapValue.set(key, item.value)
mapNumber.set(key, 1)
}
mapValue.set(key, (mapValue.get(key) || 0) + item.value)
mapNumber.set(key, (mapNumber.get(key) || 0) + item.actorCount)
}

for (const [key, value] of mapValue) {
@@ -137,7 +127,7 @@ export function calculateCumulativeBp(
mapResult.forEach((value, key) => {
const [fragmentId, downstreamFragmentId] = key.split("-").map(Number)
const backPressureInfo: BackPressureInfo = {
actorId: 0,
actorCount: 1, // the value here has already been averaged by real actor count
fragmentId,
downstreamFragmentId,
value,
@@ -161,27 +151,13 @@ export function calculateBPRate(
return convertToBackPressureMetrics(convertFromMapAndAgg(result))
}

export const BackPressureInfo = {
fromJSON: (object: any) => {
return {
actorId: isSet(object.actorId) ? Number(object.actorId) : 0,
fragmentId: isSet(object.fragmentId) ? Number(object.fragmentId) : 0,
downstreamFragmentId: isSet(object.downstreamFragmentId)
? Number(object.downstreamFragmentId)
: 0,
value: isSet(object.value) ? Number(object.value) : 0,
}
},
}

// Get back pressure from meta node -> compute node
export async function fetchEmbeddedBackPressure() {
const response: GetBackPressureResponse = await api.get(
"/metrics/fragment/embedded_back_pressures"
)
let backPressureInfos: BackPressureInfo[] =
response.backPressureInfos?.map(BackPressureInfo.fromJSON) ?? []
backPressureInfos = backPressureInfos.sort((a, b) => a.actorId - b.actorId)
return backPressureInfos
}

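The metric.ts hunks above drop the hand-rolled `BackPressureInfo` interface and its `fromJSON` helper in favor of the proto-generated type, and weight the per-fragment aggregate by `actorCount` rather than by the number of samples. A minimal TypeScript sketch of the new map-and-aggregate step (field names follow the diff; the final averaging divisor and its zero-guard are assumptions, not the exact repository code):

```typescript
interface BackPressureInfo {
  actorCount: number
  fragmentId: number
  downstreamFragmentId: number
  value: number
}

// Sum values and actor counts per (fragmentId, downstreamFragmentId) pair,
// then divide to get a per-actor average. `|| 0` keeps absent keys at zero,
// matching the simplified branch-free update in the diff.
function convertToMapAndAgg(
  backPressures: BackPressureInfo[]
): Map<string, number> {
  const mapValue = new Map<string, number>()
  const mapNumber = new Map<string, number>()
  for (const item of backPressures) {
    const key = `${item.fragmentId}-${item.downstreamFragmentId}`
    mapValue.set(key, (mapValue.get(key) || 0) + item.value)
    mapNumber.set(key, (mapNumber.get(key) || 0) + item.actorCount)
  }
  const map = new Map<string, number>()
  for (const [key, value] of mapValue) {
    map.set(key, value / (mapNumber.get(key) || 1)) // guard is an assumption
  }
  return map
}
```

Weighting by `actorCount` means a fragment pair reported by many actors no longer skews the average the way per-sample counting did.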
2 changes: 1 addition & 1 deletion dashboard/pages/fragment_graph.tsx
@@ -39,7 +39,6 @@ import Title from "../components/Title"
import useErrorToast from "../hook/useErrorToast"
import useFetch from "../lib/api/fetch"
import {
BackPressureInfo,
calculateBPRate,
calculateCumulativeBp,
fetchEmbeddedBackPressure,
@@ -52,6 +51,7 @@ import {
} from "../lib/api/streaming"
import { FragmentBox } from "../lib/layout"
import { TableFragments, TableFragments_Fragment } from "../proto/gen/meta"
import { BackPressureInfo } from "../proto/gen/monitor_service"
import { Dispatcher, MergeNode, StreamNode } from "../proto/gen/stream_plan"

interface DispatcherNode {
4 changes: 4 additions & 0 deletions docker/dashboards/risingwave-dev-dashboard.json


11 changes: 11 additions & 0 deletions e2e_test/batch/transaction/now.slt
@@ -7,26 +7,36 @@ create view v as select * from t where ts at time zone 'utc' >= now();
statement ok
create materialized view mv as select * from v;

connection txn
statement ok
SET VISIBILITY_MODE TO checkpoint;

connection txn
statement ok
insert into t select * from generate_series(
now() at time zone 'utc' - interval '10' second,
now() at time zone 'utc' + interval '20' second,
interval '1' second / 20
);

connection txn
statement ok
flush;

connection txn
statement ok
start transaction read only;

# the view should not be empty, so that the following check is meaningful
connection txn
query T
select count(*) > 0 from mv;
----
t

# the result from batch query and materialized view should be the same
skipif in-memory
connection txn
query T
select * from v
except
@@ -40,6 +50,7 @@ select * from mv;
## select * from v;
## ----

connection txn
statement ok
commit;

19 changes: 19 additions & 0 deletions e2e_test/batch/transaction/read_only_multi_conn.slt
@@ -1,15 +1,24 @@
statement ok
create table t (v int);

connection txn
statement ok
insert into t values (1), (2);

connection txn
statement ok
flush;

connection txn
statement ok
SET VISIBILITY_MODE TO checkpoint;

connection txn
statement ok
start transaction read only;

skipif in-memory
connection txn
query I
select count(*) from t;
----
@@ -20,6 +29,7 @@ connection other
statement ok
flush;

skipif in-memory
connection other
query I
select count(*) from t;
@@ -35,31 +45,40 @@ statement ok
flush;

# inserts are visible in the `other` connection,
skipif in-memory
connection other
query I
select count(*) from t;
----
3

# ...but not in the read-only transaction
skipif in-memory
connection txn
query I
select count(*) from t;
----
2

connection txn
statement ok
flush;

# still invisible even after flush
skipif in-memory
connection txn
query I
select count(*) from t;
----
2

connection txn
statement ok
commit;

# now visible outside the transaction
skipif in-memory
connection txn
query I
select count(*) from t;
----
57 changes: 57 additions & 0 deletions e2e_test/batch/types/map.slt.part
@@ -122,6 +122,63 @@ select to_jsonb(m1), to_jsonb(m2), to_jsonb(m3), to_jsonb(l), to_jsonb(s) from t
{"a": 1.0, "b": 2.0, "c": 3.0} null null null null
{"a": 1.0, "b": 2.0, "c": 3.0} {"1": true, "2": false, "3": true} {"a": {"a1": "a2"}, "b": {"b1": "b2"}} [{"a": 1, "b": 2, "c": 3}, {"d": 4, "e": 5, "f": 6}] {"m": {"a": {"x": 1}, "b": {"x": 2}, "c": {"x": 3}}}

query ?
select jsonb_populate_map(
null::map(varchar, int),
'{"a": 1, "b": 2}'::jsonb
);
----
{a:1,b:2}


query ?
select jsonb_populate_map(
MAP {'a': 1, 'b': 2},
'{"b": 3, "c": 4}'::jsonb
);
----
{a:1,b:3,c:4}


# implicit cast (int -> varchar)
query ?
select jsonb_populate_map(
MAP {'a': 'a', 'b': 'b'},
'{"b": 3, "c": 4}'::jsonb
);
----
{a:a,b:3,c:4}


query error
select jsonb_populate_map(
MAP {'a': 1, 'b': 2},
'{"b": "3", "c": 4}'::jsonb
);
----
db error: ERROR: Failed to run the query

Caused by these errors (recent errors listed first):
1: Expr error
2: error while evaluating expression `jsonb_populate_map('{a:1,b:2}', '{"b": "3", "c": 4}')`
3: Parse error: cannot cast jsonb string to type number


query error
select jsonb_populate_map(
null::map(int, int),
'{"a": 1, "b": 2}'::jsonb
);
----
db error: ERROR: Failed to run the query

Caused by these errors (recent errors listed first):
1: Expr error
2: error while evaluating expression `jsonb_populate_map(NULL, '{"a": 1, "b": 2}')`
3: Parse error: cannot convert jsonb to a map with non-string keys



statement ok
drop table t;

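The expected outputs in the map.slt.part tests above follow a simple merge rule: entries from the jsonb object are laid over the base map, overwriting duplicate keys, and a jsonb value that cannot be cast to the map's value type raises a parse error. A hedged TypeScript analogue of that behavior for a `map(varchar, int)` (illustrative only, not the engine's implementation):

```typescript
// Sketch of jsonb_populate_map semantics for map(varchar, int):
// jsonb entries overwrite matching keys in the base map; a jsonb string
// value cannot be cast to a number, mirroring the error case in the tests.
function jsonbPopulateMap(
  base: Map<string, number> | null,
  json: Record<string, unknown>
): Map<string, number> {
  const out = new Map(base ?? [])
  for (const [key, value] of Object.entries(json)) {
    if (typeof value !== "number") {
      throw new Error("cannot cast jsonb string to type number")
    }
    out.set(key, value)
  }
  return out
}
```

With this sketch, `jsonbPopulateMap(new Map([["a", 1], ["b", 2]]), { b: 3, c: 4 })` reproduces the `{a:1,b:3,c:4}` result shown in the test output.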
2 changes: 1 addition & 1 deletion e2e_test/error_ui/extended/main.slt
@@ -17,4 +17,4 @@ db error: ERROR: Failed to execute the statement
Caused by these errors (recent errors listed first):
1: Expr error
2: error while evaluating expression `general_div('1', '0')`
3: Division by zero
3: Division by zero
2 changes: 1 addition & 1 deletion e2e_test/error_ui/simple/expr.slt
@@ -68,4 +68,4 @@ db error: ERROR: Failed to run the query
Caused by these errors (recent errors listed first):
1: Expr error
2: error while evaluating expression `format('Hello', 'World')`
3: Unsupported function: unsupported specifier type 'L'
3: Unsupported function: unsupported specifier type 'L'
2 changes: 1 addition & 1 deletion e2e_test/error_ui/simple/license.slt
@@ -74,4 +74,4 @@ SELECT setting FROM pg_settings WHERE name = 'license_key';
query T
SELECT rw_test_paid_tier();
----
t
t
2 changes: 1 addition & 1 deletion e2e_test/error_ui/simple/main.slt
@@ -73,4 +73,4 @@ db error: ERROR: Failed to run the query
Caused by these errors (recent errors listed first):
1: Failed to get/set session config
2: Invalid value `maybe` for `rw_implicit_flush`
3: Invalid bool
3: Invalid bool
2 changes: 1 addition & 1 deletion e2e_test/error_ui/simple/recovery.slt
@@ -33,4 +33,4 @@ from error;
ok

statement ok
drop table t cascade;
drop table t cascade;