chore: Improve wording of error message #17766

Merged
merged 21 commits into main from wyx/review-error-message on Aug 2, 2024
Commits (21)
5b187ea
grammar check
WanYixian Jul 22, 2024
b0b0c2b
second batch
WanYixian Jul 25, 2024
d97af61
Update java/connector-node/risingwave-sink-cassandra/src/main/java/co…
WanYixian Jul 26, 2024
ed4f43c
Update java/connector-node/risingwave-sink-cassandra/src/main/java/co…
WanYixian Jul 26, 2024
d5db69e
Update java/connector-node/risingwave-sink-cassandra/src/main/java/co…
WanYixian Jul 26, 2024
6722b46
Update java/connector-node/risingwave-sink-cassandra/src/main/java/co…
WanYixian Jul 26, 2024
d0338e9
Update java/connector-node/risingwave-sink-mock-flink/risingwave-sink…
WanYixian Jul 26, 2024
2372eaf
Update java/connector-node/risingwave-sink-mock-flink/risingwave-sink…
WanYixian Jul 26, 2024
9c0c7ab
Update src/connector/src/sink/big_query.rs
WanYixian Jul 26, 2024
fa4e46a
Update src/connector/src/sink/big_query.rs
WanYixian Jul 26, 2024
208af72
Update src/connector/src/sink/deltalake.rs
WanYixian Jul 29, 2024
e82c7b6
Update src/connector/src/sink/iceberg/mod.rs
WanYixian Jul 29, 2024
5a75d92
Update src/connector/src/sink/mqtt.rs
WanYixian Jul 29, 2024
b94727e
Update src/connector/src/sink/mqtt.rs
WanYixian Jul 29, 2024
aeffaa7
Update src/connector/src/sink/mqtt.rs
WanYixian Jul 29, 2024
d8f24c7
Update src/connector/src/sink/nats.rs
WanYixian Jul 29, 2024
a99825c
Update src/connector/src/sink/nats.rs
WanYixian Jul 29, 2024
b9d07a8
fix cargo fmt
fuyufjh Aug 2, 2024
e488513
Merge remote-tracking branch 'origin/main' into wyx/review-error-message
fuyufjh Aug 2, 2024
c00d8fa
spotless apply
fuyufjh Aug 2, 2024
ef2f7fa
fix an error message
fuyufjh Aug 2, 2024
2 changes: 1 addition & 1 deletion e2e_test/source/basic/ddl.slt
@@ -53,7 +53,7 @@ create source s (
properties.bootstrap.server = 'message_queue:29092'
) FORMAT PLAIN ENCODE JSON;

statement error properties `scan_startup_mode` only support earliest and latest or leave it empty
statement error properties `scan_startup_mode` only supports earliest and latest or leaving it empty
create source invalid_startup_mode (
column1 varchar
) with (
2 changes: 1 addition & 1 deletion e2e_test/source/basic/old_row_format_syntax/ddl.slt
@@ -7,7 +7,7 @@ create source s (
properties.bootstrap.server = 'message_queue:29092'
) ROW FORMAT JSON;

statement error properties `scan_startup_mode` only support earliest and latest or leave it empty
statement error properties `scan_startup_mode` only supports earliest and latest or leaving it empty
Suggested change (Contributor):
statement error properties `scan_startup_mode` only supports earliest and latest or leaving it empty
statement error property `scan_startup_mode` only accepts three options: earliest, latest, or left empty

create source invalid_startup_mode (
column1 varchar
) with (
@@ -91,7 +91,7 @@ public static void checkSchema(
throw Status.FAILED_PRECONDITION
.withDescription(
String.format(
"Don't match in the name, rw is %s cassandra can't find it",
"Name mismatch. %s of RisingWave is not found in Cassandra.",
columnDesc.getName()))
.asRuntimeException();
}
@@ -120,14 +120,14 @@ public static void validateSchemaWithCatalog(
throw Status.FAILED_PRECONDITION
.withDescription(
String.format(
"Don't match in the name, rw is %s", columnDesc.getName()))
"Name mismatch. RisingWave is %s", columnDesc.getName()))
.asRuntimeException();
}
if (!checkType(columnDesc.getDataType(), flinkColumnMap.get(columnDesc.getName()))) {
throw Status.FAILED_PRECONDITION
.withDescription(
String.format(
"Don't match in the type, name is %s, Sink is %s, rw is %s",
"Type mismatch. Name is %s, Sink is %s, RisingWave is %s",
columnDesc.getName(),
flinkColumnMap.get(columnDesc.getName()),
columnDesc.getDataType().getTypeName()))
2 changes: 1 addition & 1 deletion src/connector/src/sink/log_store.rs
@@ -85,7 +85,7 @@ impl TruncateOffset {
} => {
if epoch != *offset_epoch {
bail!(
"new item epoch {} not match current chunk offset epoch {}",
"new item epoch {} does not match current chunk offset epoch {}",
epoch,
offset_epoch
);
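For context on the log_store.rs change, below is a minimal, self-contained sketch of the epoch-consistency check that the reworded message belongs to. The function name and the bare u64 epochs are illustrative stand-ins, not the actual TruncateOffset API; only the bail! pattern and the new wording mirror the diff above.

use anyhow::bail;

// Illustrative stand-in for the epoch check in TruncateOffset: a chunk item
// whose epoch differs from the current offset epoch is rejected.
fn check_chunk_epoch(epoch: u64, offset_epoch: u64) -> anyhow::Result<()> {
    if epoch != offset_epoch {
        bail!(
            "new item epoch {} does not match current chunk offset epoch {}",
            epoch,
            offset_epoch
        );
    }
    Ok(())
}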
6 changes: 3 additions & 3 deletions src/connector/src/sink/mqtt.rs
@@ -133,7 +133,7 @@ impl MqttConfig {
.map_err(|e| SinkError::Config(anyhow!(e)))?;
if config.r#type != SINK_TYPE_APPEND_ONLY {
Err(SinkError::Config(anyhow!(
"Mqtt sink only support append-only mode"
"Mqtt sink only supports append-only mode"
)))
} else {
Ok(config)
@@ -175,7 +175,7 @@ impl Sink for MqttSink {
async fn validate(&self) -> Result<()> {
if !self.is_append_only {
return Err(SinkError::Mqtt(anyhow!(
"Mqtt sink only support append-only mode"
"Mqtt sink only supports append-only mode"
)));
}

@@ -261,7 +261,7 @@ impl MqttSinkWriter {
},
_ => {
return Err(SinkError::Config(anyhow!(
"Mqtt sink only support append-only mode"
"Mqtt sink only supports append-only mode"
)))
}
};
2 changes: 1 addition & 1 deletion src/connector/src/sink/nats.rs
@@ -79,7 +79,7 @@ impl NatsConfig {
.map_err(|e| SinkError::Config(anyhow!(e)))?;
if config.r#type != SINK_TYPE_APPEND_ONLY {
Err(SinkError::Config(anyhow!(
"Nats sink only support append-only mode"
"Nats sink only supports append-only mode"
)))
} else {
Ok(config)
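The MQTT and NATS hunks above reword the same append-only guard. As a sketch of that shared pattern, the helper below is hypothetical: the real code returns SinkError::Config rather than a plain anyhow::Error, and each config validates its own r#type field.

use anyhow::{anyhow, Error};

const SINK_TYPE_APPEND_ONLY: &str = "append-only";

// Hypothetical helper mirroring the guard in MqttConfig/NatsConfig: any sink
// type other than append-only is rejected with the reworded message.
fn validate_append_only(sink_name: &str, sink_type: &str) -> Result<(), Error> {
    if sink_type != SINK_TYPE_APPEND_ONLY {
        return Err(anyhow!("{} sink only supports append-only mode", sink_name));
    }
    Ok(())
}

fn main() {
    assert!(validate_append_only("Mqtt", "append-only").is_ok());
    assert!(validate_append_only("Nats", "upsert").is_err());
}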
2 changes: 1 addition & 1 deletion src/connector/src/source/kafka/enumerator/client.rs
@@ -82,7 +82,7 @@ impl SplitEnumerator for KafkaSplitEnumerator {
Some("latest") => KafkaEnumeratorOffset::Latest,
None => KafkaEnumeratorOffset::Earliest,
_ => bail!(
"properties `scan_startup_mode` only support earliest and latest or leave it empty"
"properties `scan_startup_mode` only supports earliest and latest or leaving it empty"
),
};

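The e2e ddl.slt expectations at the top of this PR come from the match in this enumerator. Below is a self-contained sketch of the accepted values; the enum and function names are hypothetical, and only the match shape and the reworded message follow the diff.

use anyhow::{bail, Result};

#[derive(Debug, PartialEq)]
enum StartupOffset {
    Earliest,
    Latest,
}

// Only `earliest`, `latest`, or leaving the property unset are accepted;
// anything else surfaces the reworded error checked by the slt tests.
fn parse_scan_startup_mode(value: Option<&str>) -> Result<StartupOffset> {
    match value.map(str::to_lowercase).as_deref() {
        Some("earliest") | None => Ok(StartupOffset::Earliest),
        Some("latest") => Ok(StartupOffset::Latest),
        _ => bail!(
            "properties `scan_startup_mode` only supports earliest and latest or leaving it empty"
        ),
    }
}

fn main() -> Result<()> {
    assert_eq!(parse_scan_startup_mode(None)?, StartupOffset::Earliest);
    assert!(parse_scan_startup_mode(Some("timestamp")).is_err());
    Ok(())
}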
2 changes: 1 addition & 1 deletion src/connector/src/source/kinesis/source/reader.rs
@@ -199,7 +199,7 @@ impl KinesisSplitReader {
}
Err(e) => {
let error = anyhow!(e).context(format!(
"Kinesis got a unhandled error on stream {:?}, shard {:?}",
"Kinesis got an unhandled error on stream {:?}, shard {:?}",
self.stream_name, self.shard_id
));
tracing::error!(error = %error.as_report());
2 changes: 1 addition & 1 deletion src/connector/src/source/pulsar/enumerator/client.rs
@@ -65,7 +65,7 @@ impl SplitEnumerator for PulsarSplitEnumerator {
None => PulsarEnumeratorOffset::Earliest,
_ => {
bail!(
"properties `startup_mode` only support earliest and latest or leave it empty"
"properties `startup_mode` only supports earliest and latest or leaving it empty"
);
}
};
2 changes: 1 addition & 1 deletion src/meta/src/controller/fragment.rs
@@ -210,7 +210,7 @@ impl CatalogController {
for mut actor in pb_actors {
let mut upstream_actors = BTreeMap::new();

let node = actor.nodes.as_mut().context("nodes is empty")?;
let node = actor.nodes.as_mut().context("nodes are empty")?;

visit_stream_node(node, |body| {
if let NodeBody::Merge(m) = body {
2 changes: 1 addition & 1 deletion src/meta/src/manager/catalog/database.rs
@@ -255,7 +255,7 @@ impl DatabaseManager {
&& x.name.eq(&relation_key.2)
}) {
if t.stream_job_status == StreamJobStatus::Creating as i32 {
bail!("table is in creating procedure, table id: {}", t.id);
bail!("Creating the table, table id: {}", t.id);
} else {
Err(MetaError::catalog_duplicated("table", &relation_key.2))
}
2 changes: 1 addition & 1 deletion src/meta/src/manager/catalog/fragment.rs
@@ -320,7 +320,7 @@ impl FragmentManager {
let map = &mut guard.table_fragments;
let table_id = table_fragment.table_id();
if map.contains_key(&table_id) {
bail!("table_fragment already exist: id={}", table_id);
bail!("table_fragment already exists: id={}", table_id);
}

let mut table_fragments = BTreeMapTransaction::new(map);
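As a quick illustration of the duplicate-registration guard fixed above, the sketch below uses a placeholder map and signature rather than the real FragmentManager API; only the contains_key check and the corrected message reflect the diff.

use std::collections::BTreeMap;

use anyhow::bail;

// Placeholder for the real table_fragments map: inserting an id that is
// already present fails with the corrected "already exists" wording.
fn register_fragment(
    map: &mut BTreeMap<u32, String>,
    table_id: u32,
    fragment: String,
) -> anyhow::Result<()> {
    if map.contains_key(&table_id) {
        bail!("table_fragment already exists: id={}", table_id);
    }
    map.insert(table_id, fragment);
    Ok(())
}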
2 changes: 1 addition & 1 deletion src/meta/src/rpc/ddl_controller.rs
@@ -386,7 +386,7 @@ impl DdlController {
.await?
.is_empty()
{
bail!("There are background creating jobs, please try again later")
bail!("The system is creating jobs in the background, please try again later")
}

self.stream_manager
4 changes: 2 additions & 2 deletions src/stream/src/executor/dynamic_filter.rs
@@ -403,8 +403,8 @@ impl<S: StateStore, const USE_WATERMARK_CACHE: bool> DynamicFilterExecutor<S, US
let (range, _latest_is_lower, is_insert) = self.get_range(&curr, prev);

if !is_insert && self.condition_always_relax {
bail!("The optimizer inferred that the right side's change always make the condition more relaxed.\
But the right changes make the conditions stricter.");
bail!("The optimizer incorrectly assumed that changes on the right side always relax the condition.\
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

It's a bit hard to understand this message.

But they actually make it stricter.");
}

let range = (Self::to_row_bound(range.0), Self::to_row_bound(range.1));
2 changes: 1 addition & 1 deletion src/stream/src/executor/source/state_table_handler.rs
@@ -144,7 +144,7 @@ impl<S: StateStore> SourceStateTableHandler<S> {
) -> StreamExecutorResult<()> {
if states.is_empty() {
// TODO should be a clear Error Code
bail!("states require not null");
bail!("states should not be null");
} else {
for split in states {
self.set_complete(split.id(), split.encode_to_json())
2 changes: 1 addition & 1 deletion src/stream/src/executor/watermark_filter.rs
@@ -333,7 +333,7 @@ impl<S: StateStore> WatermarkFilterExecutor<S> {
if row.len() == 1 {
Ok::<_, StreamExecutorError>(row[0].to_owned())
} else {
bail!("The watermark row should only contains 1 datum");
bail!("The watermark row should only contain 1 datum");
}
}
_ => Ok(None),