
panic when using Iceberg Sink connector from materialized view #17296

Open
anubhavgupta2404 opened this issue Jun 18, 2024 · 22 comments · Fixed by #17478 · May be fixed by #17718
Labels: type/bug (Something isn't working), user-feedback
@anubhavgupta2404

Describe the bug

The sink connector to an Iceberg table in a Hive catalog is throwing an error. It is supposed to sink data from a materialized view into the Iceberg table, but RisingWave fails to do so.

Error message/log

SQL Error [XX000]: ERROR: Panicked when handling the request: called `Result::unwrap()` on an `Err` value: JavaException
This is a bug. We would appreciate a bug report at:
  https://github.com/risingwavelabs/risingwave/issues/new?labels=type%2Fbug&template=bug_report.yml

org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [XX000]: ERROR: Panicked when handling the request: called `Result::unwrap()` on an `Err` value: JavaException
This is a bug. We would appreciate a bug report at:
  https://github.com/risingwavelabs/risingwave/issues/new?labels=type%2Fbug&template=bug_report.yml
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:133)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:614)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$2(SQLQueryJob.java:505)
	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:191)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:524)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:976)
	at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:4133)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:123)
	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:191)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:121)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:5148)
	at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:114)
	at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: org.postgresql.util.PSQLException: ERROR: Panicked when handling the request: called `Result::unwrap()` on an `Err` value: JavaException
This is a bug. We would appreciate a bug report at:
  https://github.com/risingwavelabs/risingwave/issues/new?labels=type%2Fbug&template=bug_report.yml
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2725)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2412)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:371)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:502)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:419)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:341)
	at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:326)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:302)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:297)
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:330)
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:131)
	... 12 more

To Reproduce

-------------- Create a table that fetches data from Google Pub/Sub as a source
create table public.segment_impressions (
"anonymousId" varchar,
"context" jsonb,
"event" varchar,
"integrations" jsonb,
"messageId" varchar,
"originalTimestamp" varchar,
"properties" jsonb,
"receivedAt" varchar,
"sentAt" varchar,
"timestamp" varchar,
"type" varchar,
"writeKey" varchar
)
WITH (
connector = 'google_pubsub',
pubsub.subscription = 'risingwave-test',
pubsub.credentials = ''
) FORMAT PLAIN ENCODE JSON;

---------- Create a materialized view that transforms the data flowing from the source
create materialized view public.segment_impression_event_mv
(appointment_id, auction_id, tvcDealerId, timestamp_ist)
as
select
replace((json_data->'appointment_id')::varchar,'"','') as appointment_id,
replace((json_data->'auction_id')::varchar,'"','') as auction_id,
tvcDealerId, timestamp_ist
from(
select jsonb_array_elements((properties->'car_info' #>> '{}')::jsonb) as json_data, replace((properties->'tvcDealerId')::varchar,'"','') as tvcDealerId ,
to_timestamp(substring(replace(timestamp,'T',' '),1,19),'YYYY-MM-DD HH24:MI:SS')::timestamp without time zone + INTERVAL '330 MINUTES' as timestamp_ist
from public.segment_impressions
where to_timestamp(substring(replace(timestamp,'T',' '),1,19),'YYYY-MM-DD HH24:MI:SS')::timestamp without time zone + INTERVAL '330 MINUTES' >= current_timestamp - interval '2 DAYS'
) as tbl;

--------------- Sink the transformed data from the materialized view to an Iceberg table in a Hive catalog
CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv
WITH (
connector = 'iceberg',
type = 'append-only',
force_append_only = 'true',
warehouse.path = 's3a://hive-iceberg/segment_db/',
s3.endpoint = 'http://X.X.X.X',
s3.access.key = '',
s3.secret.key = '************',
s3.region = 'asia-south1',
-- catalog.name = 'iceberg_hive_qa',
catalog.type = 'hive',
catalog.uri = 'thrift://X.X.X.X:9083',
database.name = 'segmentdb',
table.name = 'segment_impression_data_rw'
-- primary_key='seq_id'
);

--------------- This is where the bug occurs: loading the MV data into the Iceberg sink connector fails.

Expected behavior

I expected data to flow from the materialized view into the Iceberg table in the Hive catalog. Instead, RisingWave is unable to read the Iceberg table's metadata to set up the connection.
The I/O operation fails, and even though the statement is syntactically correct, the sink creation fails without proper error details.

How did you deploy RisingWave?

Deployed on GKE through the operator. My risingwave-operator-system YAML file is:
apiVersion: v1
kind: Service
metadata:
  name: risingwave-etcd
  labels:
    app: risingwave-etcd
spec:
  ports:
    - port: 2388
      name: client
    - port: 2389
      name: peer
  selector:
    app: risingwave-etcd
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: risingwave-etcd
  name: risingwave-etcd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: risingwave-etcd
  serviceName: risingwave-etcd
  volumeClaimTemplates:
    - metadata:
        name: etcd-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete
    whenScaled: Retain
  template:
    metadata:
      labels:
        app: risingwave-etcd
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: risingwave-pool
      containers:
        - name: etcd
          image: quay.io/coreos/etcd:latest
          imagePullPolicy: IfNotPresent
          command:
            - /usr/local/bin/etcd
          args:
            - "--listen-client-urls"
            - "http://0.0.0.0:2388/"
            - "--advertise-client-urls"
            - "http://risingwave-etcd-0:2388/"
            - "--listen-peer-urls"
            - "http://0.0.0.0:2389/"
            - "--initial-advertise-peer-urls"
            - "http://risingwave-etcd-0:2389/"
            - "--listen-metrics-urls"
            - "http://0.0.0.0:2379/"
            - "--name"
            - "risingwave-etcd"
            - "--max-txn-ops"
            - "999999"
            - "--max-request-bytes"
            - "104857600"
            - "--auto-compaction-mode"
            - periodic
            - "--auto-compaction-retention"
            - 1m
            - "--snapshot-count"
            - "10000"
            - --quota-backend-bytes
            - "8589934592"
            - --data-dir
            - /var/lib/etcd
          env:
            - name: ALLOW_NONE_AUTHENTICATION
              value: "1"
          ports:
            - containerPort: 2389
              name: peer
              protocol: TCP
            - containerPort: 2388
              name: client
              protocol: TCP
          volumeMounts:
            - mountPath: /var/lib/etcd
              name: etcd-data
---
apiVersion: v1
kind: Secret
metadata:
  name: gcs-credentials
stringData:
  ServiceAccountCredentials: ""
---
apiVersion: risingwave.risingwavelabs.com/v1alpha1
kind: RisingWave
metadata:
  name: risingwave-etcd-gcs
spec:
  metaStore:
    etcd:
      endpoint: risingwave-etcd:2388
  stateStore:
    gcs:
      bucket: risingwave-test
      root: risingwave
      credentials:
        secretName: gcs-credentials
        serviceAccountCredentialsKeyRef: ServiceAccountCredentials
  image: risingwavelabs/risingwave:v1.7.3
  components:
    meta:
      nodeGroups:
        - replicas: 1
          name: ""
          template:
            spec:
              nodeSelector:
                cloud.google.com/gke-nodepool: risingwave-pool
              volumes:
                - name: heap
                  emptyDir:
                    sizeLimit: 1Gi
              volumeMounts:
                - mountPath: /heap
                  name: heap
              env:
                - name: MALLOC_CONF
                  value: prof:true,lg_prof_interval=-1,lg_prof_sample=20,prof_prefix:/heap/
                - name: RW_HEAP_PROFILING_DIR
                  value: /heap
              resources:
                limits:
                  cpu: 1
                  memory: 2Gi
                requests:
                  cpu: 1
                  memory: 2Gi
    frontend:
      nodeGroups:
        - replicas: 1
          name: ""
          template:
            spec:
              nodeSelector:
                cloud.google.com/gke-nodepool: risingwave-pool
              resources:
                limits:
                  cpu: 1
                  memory: 2Gi
                requests:
                  cpu: 1
                  memory: 2Gi
    compute:
      nodeGroups:
        - replicas: 1
          name: ""
          template:
            spec:
              nodeSelector:
                cloud.google.com/gke-nodepool: risingwave-pool
              volumes:
                - name: heap
                  emptyDir:
                    sizeLimit: 1Gi
              volumeMounts:
                - mountPath: /heap
                  name: heap
              env:
                - name: MALLOC_CONF
                  value: prof:true,lg_prof_interval=-1,lg_prof_sample=20,prof_prefix:/heap/
                - name: RW_HEAP_PROFILING_DIR
                  value: /heap
              resources:
                limits:
                  cpu: 4
                  memory: 16Gi # Memory limit will be set to RW_TOTAL_MEMORY_BYTES
                requests:
                  cpu: 4
                  memory: 16Gi
    compactor:
      nodeGroups:
        - replicas: 1
          name: ""
          template:
            spec:
              nodeSelector:
                cloud.google.com/gke-nodepool: risingwave-pool
              volumes:
                - name: heap
                  emptyDir:
                    sizeLimit: 1Gi
              volumeMounts:
                - mountPath: /heap
                  name: heap
              env:
                - name: MALLOC_CONF
                  value: prof:true,lg_prof_interval=-1,lg_prof_sample=20,prof_prefix:/heap/
                - name: RW_HEAP_PROFILING_DIR
                  value: /heap
              resources:
                limits:
                  cpu: 2
                  memory: 4Gi
                requests:
                  cpu: 2
                  memory: 4Gi

The version of RisingWave

PostgreSQL 9.5.0-RisingWave-1.7.3 (cfefe78)

Additional context

No response

@anubhavgupta2404 anubhavgupta2404 added the type/bug Something isn't working label Jun 18, 2024
@github-actions github-actions bot added this to the release-1.10 milestone Jun 18, 2024
@xxchan xxchan changed the title Iceberg Sink connector from materialized view panic when using Iceberg Sink connector from materialized view Jun 19, 2024
@xxchan (Member) commented Jun 19, 2024

cc @chenzl25 @wenym1

@chenzl25 (Contributor) commented:

Thanks for the report, @anubhavgupta2404. Could you please try again with our latest version, v1.9.1?

@anubhavgupta2404 (Author) commented Jun 25, 2024

Hey @chenzl25,

We have upgraded to PostgreSQL 13.14.0-RisingWave-1.9.1 (4fa6c8b).

But upon running the same script to create the Iceberg sink connector from the materialized view, I am now getting the following error:

SQL Error [XX000]: ERROR: Failed to execute the statement
Caused by these errors (recent errors listed first):
1: connector error
2: Iceberg error
3: Unexpected => Failed to load iceberg table., source
4: Failed to load iceberg table: segmentdb.segment_impression_data_rw
5: Java exception was thrown

The detailed error message is as follows:

org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [XX000]: ERROR: Failed to execute the statement

Caused by these errors (recent errors listed first):
  1: connector error
  2: Iceberg error
  3: Unexpected => Failed to load iceberg table., source
  4: Failed to load iceberg table: segmentdb.segment_impression_data_rw
  5: Java exception was thrown

	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:133)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:614)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$2(SQLQueryJob.java:505)
	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:191)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:524)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:976)
	at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:4133)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:123)
	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:191)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:121)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:5148)
	at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:114)
	at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)

Caused by: org.postgresql.util.PSQLException: ERROR: Failed to execute the statement
Caused by these errors (recent errors listed first):
  1: connector error
  2: Iceberg error
  3: Unexpected => Failed to load iceberg table., source
  4: Failed to load iceberg table: segmentdb.segment_impression_data_rw
  5: Java exception was thrown

	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2725)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2412)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:371)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:502)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:419)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:341)
	at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:326)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:302)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:297)
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:330)
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:131)
	... 12 more

@chenzl25 (Contributor) commented:

@anubhavgupta2404 Could you please provide the RisingWave logs (frontend and compute nodes) from when this error occurred? The internal stack trace is not reported to the client.

@swapkh91 commented:
@chenzl25
FE logs

2024-06-26T07:06:15.575571565Z ERROR handle_query{mode="extended query execute" session_id=11 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: risingwave_connector_node: HMSHandler Fatal error: MetaException(message:The class loader argument to this method cannot be null.)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:208)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:112)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Exception in thread "Thread-45" org.apache.iceberg.hive.RuntimeMetaException: Failed to connect to Hive Metastore
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:84)
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:34)
	at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125)
	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56)
	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
	at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:122)
	at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:158)
	at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:97)
	at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:80)
	at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:47)
	at org.apache.iceberg.rest.CatalogHandlers.loadTable(CatalogHandlers.java:257)
	at com.risingwave.connector.catalog.JniCatalogWrapper.loadTable(JniCatalogWrapper.java:47)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:61)
	at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:73)
	at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:186)
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:63)
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:34)
	at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125)
	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56)
	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
	at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:122)
	at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:158)
	at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:97)
	at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:80)
	at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:47)
	at org.apache.iceberg.rest.CatalogHandlers.loadTable(CatalogHandlers.java:257)
	at com.risingwave.connector.catalog.JniCatalogWrapper.loadTable(JniCatalogWrapper.java:47)
Caused by: javax.jdo.JDOFatalUserException: The class loader argument to this method cannot be null.
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:799)
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:651)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:86)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:112)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:61)
	at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:73)
	at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:186)
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:63)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:694)
	at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:484)
	... 11 more
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:421)
Caused by: java.lang.reflect.InvocationTargetException
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
	... 23 more
Caused by: MetaException(message:The class loader argument to this method cannot be null.)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
	... 28 more
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:376)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:79)
Caused by: MetaException(message:The class loader argument to this method cannot be null.)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:208)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:139)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:59)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:720)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:698)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:692)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:775)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
	at jdk.internal.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	... 33 more
 thread="Thread-45" class="org.apache.hadoop.hive.metastore.RetryingHMSHandler"
	... 31 more
Caused by: javax.jdo.JDOFatalUserException: The class loader argument to this method cannot be null.
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:799)
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:651)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:694)
	at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:484)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:421)
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:376)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:79)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:139)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:59)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:720)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:698)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:692)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:775)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
	at jdk.internal.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	... 33 more

I didn't find anything relevant in the compute node logs.
What I noticed is "Failed to connect to Hive Metastore". I'm not sure what the issue could be, because we use the same settings to connect to the Hive metastore from StarRocks.

@chenzl25 (Contributor) commented Jun 27, 2024

@swapkh91 Thank you for the log. RisingWave uses JNI to interact with the Hive catalog, and the error log suggests that the thread's ContextClassLoader cannot be loaded. I have opened PR #17478 to fix this. Could you please verify it, if possible? I can send you an image if necessary.
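For illustration of that diagnosis (this is a hypothetical sketch, not the actual code of PR #17478): Hive's JDO layer (`javax.jdo.JDOHelper`) throws `JDOFatalUserException: The class loader argument to this method cannot be null` when the calling thread has a null context class loader, which is the default for JNI-attached threads. A guard like the following hypothetical helper installs a fallback loader around the catalog call; the class and method names are made up for this example.

```java
import java.util.function.Supplier;

// Hypothetical helper illustrating the kind of fix needed: JNI-attached
// threads start with a null context class loader, and Hive's JDO layer
// rejects a null class loader. Installing a fallback loader before calling
// into the Hive catalog avoids the JDOFatalUserException.
public final class ContextClassLoaderGuard {
    private ContextClassLoaderGuard() {}

    /**
     * Runs `action` with a guaranteed non-null context class loader,
     * restoring the previous loader (even a null one) afterwards.
     */
    public static <T> T runWithFallbackLoader(Supplier<T> action) {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        if (previous == null) {
            // Fall back to the loader that loaded this class.
            current.setContextClassLoader(ContextClassLoaderGuard.class.getClassLoader());
        }
        try {
            return action.get();
        } finally {
            current.setContextClassLoader(previous);
        }
    }
}
```

In this sketch, the JNI entry point (e.g. a `loadTable` wrapper) would wrap its body in `runWithFallbackLoader(...)` so that any Hive/JDO code it triggers sees a usable class loader.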

@swapkh91 commented:

@chenzl25 Sure, send me the image URL and I'll deploy it and test.

@swapkh91 commented Jul 2, 2024

@chenzl25 I deployed that image and am now getting an error while creating the MV; it works in 1.9.1.
cc @anubhavgupta2404

SQL Error [XX000]: ERROR: Failed to execute the statement
Caused by these errors (recent errors listed first):
  1: gRPC request to meta service failed: Internal error
  2: get error from control stream: worker node 2, gRPC request to stream service failed: Internal error: Storage error: Hummock error: Other error: failed sync task: ObjectStore failed with IO error: Unexpected (persistent) at Writer::write, context: { uri: https://storage.googleapis.com/upload/storage/v1/b/risingwave-test-buck/o?uploadType=resumable&name=risingwave_new/hummock/69/2.data&upload_id=ACJd0NoCyHUj36WSabDeZss8AWOwwC-oo8mNI-zshEntQ8DJWSmMxtXIFZALQUbg27l9dx7S-iWx7eusBEHrcNzhLtWXrlNDfHyjALb-M6DDbw0E6NM, response: Parts { status: 503, version: HTTP/1.1, headers: {"content-type": "text/plain; charset=utf-8", "content-length": "152", "date": "Tue, 02 Jul 2024 11:16:24 GMT", "server": "UploadServer", "connection": "close"} }, service: gcs, path: risingwave_new/hummock/69/2.data, size: 16786865, written: 168070278 } => Invalid request. According to the Content-Range header, the upload offset is 16832842 byte(s), which exceeds already uploaded size of 16777216 byte(s).

Backtrace:
   0: ::capture at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/thiserror-ext-0.1.2/src/backtrace.rs:30:18
   1: thiserror_ext::ptr::ErrorBox::new at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/thiserror-ext-0.1.2/src/ptr.rs:40:33
   2: <risingwave_object_store::object::error::objecterror as core::convert::from>::from at ./risingwave/src/object_store/src/object/error.rs:26:45
   3: <t as core::convert::into>::into at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/convert/mod.rs:759:9
   4: risingwave_object_store::object::opendal_engine::opendal_object_store::OpendalStreamingUploader::flush::{{closure}} at ./risingwave/src/object_store/src/object/opendal_engine/opendal_object_store.rs:371:24
   5: ::write_bytes::{{closure}} at ./risingwave/src/object_store/src/object/opendal_engine/opendal_object_store.rs:384:26
   6: <await_tree::future::instrumented as core::future::future::Future>::poll at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/await-tree-0.2.1/src/future.rs:119:15
   7: risingwave_object_store::object::MonitoredStreamingUploader::write_bytes::{{closure}} at ./risingwave/src/object_store/src/object/mod.rs:394:18
   8: risingwave_object_store::object::ObjectStoreEnum<risingwave_object_store::object::monitoredstreaminguploader::StreamingUploader>,risingwave_object_store::object::MonitoredStreamingUploader::StreamingUploader>,risingwave_object_store::object::MonitoredStreamingUploader::StreamingUploader>>::write_bytes::{{closure}} at ./risingwave/src/object_store/src/object/mod.rs:269:9
   9: ::write_block::{{closure}} at ./risingwave/src/storage/src/hummock/sstable_store.rs:956:14
  10: <core::pin::pin as core::future::future::Future>::poll at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/future/future.rs:123:9
  11: ::write_block::{{closure}} at ./risingwave/src/storage/src/hummock/sstable_store.rs:1096:54
  12: <core::pin::pin as core::future::future::Future>::poll at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/future/future.rs:123:9
  13: risingwave_storage::hummock::sstable::builder::SstableBuilder::build_block::{{closure}} at ./risingwave/src/storage/src/hummock/sstable/builder.rs:562:52
  14: risingwave_storage::hummock::sstable::builder::SstableBuilder::add_impl::{{closure}} at ./risingwave/src/storage/src/hummock/sstable/builder.rs:304:32
  15: risingwave_storage::hummock::sstable::builder::SstableBuilder::add::{{closure}} at ./risingwave/src/storage/src/hummock/sstable/builder.rs:244:46
  16: risingwave_storage::hummock::sstable::multi_builder::CapacitySplitTableBuilder::add_full_key::{{closure}} at ./risingwave/src/storage/src/hummock/sstable/multi_builder.rs:209:38
  17: <await_tree::future::instrumented as core::future::future::Future>::poll at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/await-tree-0.2.1/src/future.rs:119:15
  18: risingwave_storage::hummock::compactor::compactor_runner::compact_and_build_sst::{{closure}} at ./risingwave/src/storage/src/hummock/compactor/compactor_runner.rs:777:18
  19: <await_tree::future::instrumented as core::future::future::Future>::poll at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/await-tree-0.2.1/src/future.rs:119:15
  20: risingwave_storage::hummock::compactor::Compactor::compact_key_range_impl::{{closure}} at ./risingwave/src/storage/src/hummock/compactor/mod.rs:262:10
  21: <await_tree::future::instrumented as core::future::future::Future>::poll at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/await-tree-0.2.1/src/future.rs:119:15
  22: risingwave_storage::hummock::compactor::Compactor::compact_key_range::{{closure}} at ./risingwave/src/storage/src/hummock/compactor/mod.rs:171:18
  23: risingwave_storage::hummock::compactor::shared_buffer_compact::SharedBufferCompactRunner::run::{{closure}} at ./risingwave/src/storage/src/hummock/compactor/shared_buffer_compact.rs:599:14
  24: <tokio::task::task_local::tasklocalfuture as core::future::future::Future>::poll::{{closure}} at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/task/task_local.rs:391:31
  25: tokio::task::task_local::LocalKey::scope_inner at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/task/task_local.rs:217:19
  26: <tokio::task::task_local::tasklocalfuture as core::future::future::Future>::poll at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/task/task_local.rs:387:19
  27: await_tree::root::TreeRoot::instrument::{{closure}} at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/await-tree-0.2.1/src/root.rs:43:34
  28: <futures_util::future::either::either as core::future::future::Future>::poll at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-util-0.3.30/src/future/either.rs:109:32
  29: <tracing::instrument::instrumented as core::future::future::Future>::poll at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tracing-0.1.40/src/instrument.rs:321:9
  30: tokio::runtime::task::core::Core::poll::{{closure}} at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:328:17
  31: tokio::loom::std::unsafe_cell::UnsafeCell::with_mut at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/loom/std/unsafe_cell.rs:16:9
  32: tokio::runtime::task::core::Core::poll at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:317:30
  33: tokio::runtime::task::harness::poll_future::{{closure}} at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:485:19
  34: <core::panic::unwind_safe::assertunwindsafe as core::ops::function::FnOnce>::call_once at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/panic/unwind_safe.rs:272:9
  35: std::panicking::try::do_call
at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:552:40 36: std::panicking::try at ./rustc/
4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:516:19 37: std::panic::catch_unwind at ./rustc/
4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panic.rs:146:14 38: tokio::runtime::task::harness::poll_future 
at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:473:18 39: 
tokio::runtime::task::harness::Harness::poll_inner at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/
src/runtime/task/harness.rs:208:27 40: tokio::runtime::task::harness::Harness::poll at ./root/.cargo/registry/src/
index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:153:15 41: tokio::runtime::task::raw::RawTask::poll 
at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/raw.rs:201:18 42: 
tokio::runtime::task::LocalNotified::run at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/
task/mod.rs:427:9 43: tokio::runtime::scheduler::multi_thread::worker::Context::run_task::{{closure}} at ./root/.cargo/registry/
src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:576:18 44: 
tokio::runtime::coop::with_budget at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/
coop.rs:107:5 45: tokio::runtime::coop::budget at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/
runtime/coop.rs:73:5 46: tokio::runtime::scheduler::multi_thread::worker::Context::run_task at ./root/.cargo/registry/src/
index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:575:9 47: 
tokio::runtime::scheduler::multi_thread::worker::Context::run at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/
tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:526:24 48: tokio::runtime::scheduler::multi_thread::worker::run::
{{closure}}::{{closure}} at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/
multi_thread/worker.rs:491:21 49: tokio::runtime::context::scoped::Scoped::set at ./root/.cargo/registry/src/
index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context/scoped.rs:40:9 50: tokio::runtime::context::set_scheduler::
{{closure}} at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context.rs:176:26 51: 
std::thread::local::LocalKey::try_with at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/thread/
local.rs:284:16 52: std::thread::local::LocalKey::with at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/
thread/local.rs:260:9 53: tokio::runtime::context::set_scheduler at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/
tokio-1.37.0/src/runtime/context.rs:176:17 54: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}} at ./root/.cargo/
registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:486:9 55: 
tokio::runtime::context::runtime::enter_runtime at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/
runtime/context/runtime.rs:65:16 56: tokio::runtime::scheduler::multi_thread::worker::run at ./root/.cargo/registry/src/
index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:478:5 57: 
tokio::runtime::scheduler::multi_thread::worker::Launch::launch::{{closure}} at ./root/.cargo/registry/src/
index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:447:45 58: 
<tokio::runtime::blocking::task::blockingtask as core::future::future::Future>::poll at ./root/.cargo/registry/src/
index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/task.rs:42:21 59: <tracing::instrument::instrumented as 
core::future::future::Future>::poll at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tracing-0.1.40/src/
instrument.rs:321:9 60: tokio::runtime::task::core::Core::poll::{{closure}} at ./root/.cargo/registry/src/
index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:328:17 61: 
tokio::loom::std::unsafe_cell::UnsafeCell::with_mut at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/
src/loom/std/unsafe_cell.rs:16:9 62: tokio::runtime::task::core::Core::poll at ./root/.cargo/registry/src/
index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:317:30 63: tokio::runtime::task::harness::poll_future::
{{closure}} at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:485:19 64: 
<core::panic::unwind_safe::assertunwindsafe as core::ops::function::FnOnce>::call_once at ./rustc/
4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/panic/unwind_safe.rs:272:9 65: std::panicking::try::do_call 
at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:552:40 66: std::panicking::try at ./rustc/
4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:516:19 67: std::panic::catch_unwind at ./rustc/
4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panic.rs:146:14 68: tokio::runtime::task::harness::poll_future 
at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:473:18 69: 
tokio::runtime::task::harness::Harness::poll_inner at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/
src/runtime/task/harness.rs:208:27 70: tokio::runtime::task::harness::Harness::poll at ./root/.cargo/registry/src/
index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:153:15 71: tokio::runtime::task::raw::RawTask::poll 
at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/raw.rs:201:18 72: 
tokio::runtime::task::UnownedTask::run at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/
task/mod.rs:464:9 73: tokio::runtime::blocking::pool::Task::run at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/
tokio-1.37.0/src/runtime/blocking/pool.rs:159:9 74: tokio::runtime::blocking::pool::Inner::run at ./root/.cargo/registry/src/
index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/pool.rs:513:17 75: 
tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}} at ./root/.cargo/registry/src/
index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/pool.rs:471:13 76: 
std::sys_common::backtrace::__rust_begin_short_backtrace at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/
std/src/sys_common/backtrace.rs:155:18 77: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}} at ./rustc/
4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/thread/mod.rs:528:17 78: 
<core::panic::unwind_safe::assertunwindsafe as core::ops::function::FnOnce>::call_once at ./rustc/
4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/panic/unwind_safe.rs:272:9 79: std::panicking::try::do_call 
at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:552:40 80: std::panicking::try at ./rustc/
4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:516:19 81: std::panic::catch_unwind at ./rustc/
4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panic.rs:146:14 82: std::thread::Builder::spawn_unchecked_::
{{closure}} at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/thread/mod.rs:527:30 83: 
core::ops::function::FnOnce::call_once{{vtable.shim}} at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/
src/ops/function.rs:250:5 84: <alloc::boxed::box as core::ops::function::FnOnce>::call_once at ./rustc/
4a0cc881dcc4d800f10672747f61a94377ff6662/library/alloc/src/boxed.rs:2020:9 85: <alloc::boxed::box as 
core::ops::function::FnOnce>::call_once at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/alloc/src/
boxed.rs:2020:9 86: std::sys::pal::unix::thread::Thread::new::thread_start at ./rustc/
4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/sys/pal/unix/thread.rs:108:17 87: start_thread at ./nptl/
pthread_create.c:447:8 88: __GI___clone3 at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 ;

@chenzl25
Contributor

chenzl25 commented Jul 2, 2024

@wenym1 Do you have any idea? The upload offset is 16832842 byte(s), which exceeds the already uploaded size of 16777216 byte(s)
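Editor's note: the numbers are consistent with a 16 MiB (16777216-byte) part size, i.e. the writer tried to continue the streaming upload past the bytes that had actually been uploaded, leaving a gap between parts. A minimal sketch of the invariant being violated (hypothetical types, not RisingWave's actual uploader):

```rust
// Hypothetical sketch: in a streaming multipart upload, the next write
// must start exactly at (or before) the size already uploaded; an offset
// beyond it means a gap between parts, which the object store rejects.

const PART_SIZE: u64 = 16 * 1024 * 1024; // 16777216 bytes

struct StreamingUpload {
    uploaded: u64, // bytes durably uploaded so far
}

impl StreamingUpload {
    fn check_offset(&self, offset: u64) -> Result<(), String> {
        if offset > self.uploaded {
            Err(format!(
                "upload offset is {offset} byte(s), which exceeds already uploaded size of {} byte(s)",
                self.uploaded
            ))
        } else {
            Ok(())
        }
    }
}

fn main() {
    let upload = StreamingUpload { uploaded: PART_SIZE };
    // Continuing at the uploaded boundary is fine; jumping past it is the bug.
    assert!(upload.check_offset(16777216).is_ok());
    let err = upload.check_offset(16832842).unwrap_err();
    println!("{err}");
}
```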

@swapkh91

swapkh91 commented Jul 5, 2024

@chenzl25 if this is fixed then which release should I use to test?

@chenzl25 chenzl25 reopened this Jul 5, 2024
@chenzl25
Contributor

chenzl25 commented Jul 5, 2024

@chenzl25 if this is fixed then which release should I use to test?

Sorry, the issue was closed automatically by the related PR. The fix will be included in the next version, i.e. 1.11.0.

We release a nightly Docker image every day; you can use it to test if you need this fix urgently. The current latest nightly image is nightly-20240704:
https://github.com/risingwavelabs/risingwave/pkgs/container/risingwave/239312142?tag=nightly-20240704

@swapkh91

swapkh91 commented Jul 5, 2024

@chenzl25 if this is fixed then which release should I use to test?

Sorry, the issue was closed automatically by the related PR. The fix will be included in the next version, i.e. 1.11.0.

We release a nightly Docker image every day; you can use it to test if you need this fix urgently. The current latest nightly image is nightly-20240704 https://github.com/risingwavelabs/risingwave/pkgs/container/risingwave/239312142?tag=nightly-20240704

sure @chenzl25, I'll try

@swapkh91

swapkh91 commented Jul 5, 2024

@chenzl25 I changed the image in my YAML and re-applied.
Previously I was on 1.9.1 and changed it to ghcr.io/risingwavelabs/risingwave:nightly-20240704.
I'm getting the below error in meta:

2024-07-05T08:47:30.617195198Z  WARN risingwave_meta::barrier::rpc: get error from response stream node=WorkerNode { id: 5, r#type: ComputeNode, host: Some(HostAddress { host: "risingwave-etcd-gcs-compute-0.risingwave-etcd-gcs-compute", port: 5688 }), state: Running, parallel_units: [ParallelUnit { id: 8, worker_node_id: 5 }, ParallelUnit { id: 9, worker_node_id: 5 }, ParallelUnit { id: 10, worker_node_id: 5 }, ParallelUnit { id: 11, worker_node_id: 5 }], property: Some(Property { is_streaming: true, is_serving: true, is_unschedulable: false }), transactional_id: Some(3), resource: None, started_at: None } err=gRPC request to stream service failed: Internal error: Storage error: Hummock error: Other error: failed sync task: Meta error: gRPC request to meta service failed: The service is currently unavailable: transport error
INFO 2024-07-05T08:47:30.617679258Z [resource.labels.containerName: meta] Backtrace:
INFO 2024-07-05T08:47:30.617684138Z [resource.labels.containerName: meta] 0: <thiserror_ext::backtrace::MaybeBacktrace as thiserror_ext::backtrace::WithBacktrace>::capture
INFO 2024-07-05T08:47:30.617688638Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/thiserror-ext-0.1.2/src/backtrace.rs:30:18
INFO 2024-07-05T08:47:30.617699078Z [resource.labels.containerName: meta] 1: thiserror_ext::ptr::ErrorBox<T,B>::new
INFO 2024-07-05T08:47:30.617702438Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/thiserror-ext-0.1.2/src/ptr.rs:40:33
INFO 2024-07-05T08:47:30.617707627Z [resource.labels.containerName: meta] 2: <risingwave_storage::hummock::error::HummockError as core::convert::From<E>>::from
INFO 2024-07-05T08:47:30.617714198Z [resource.labels.containerName: meta] at ./risingwave/src/storage/src/hummock/error.rs:22:45
INFO 2024-07-05T08:47:30.617719347Z [resource.labels.containerName: meta] 3: <T as core::convert::Into<U>>::into
INFO 2024-07-05T08:47:30.617725227Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/convert/mod.rs:759:9
INFO 2024-07-05T08:47:30.617731387Z [resource.labels.containerName: meta] 4: risingwave_storage::hummock::error::HummockError::meta_error at ./risingwave/src/storage/src/hummock/error.rs:101:57
INFO 2024-07-05T08:47:30.617742818Z [resource.labels.containerName: meta] 5: core::ops::function::FnOnce::call_once
INFO 2024-07-05T08:47:30.617748278Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/ops/function.rs:250:5
INFO 2024-07-05T08:47:30.617753038Z [resource.labels.containerName: meta] 6: core::result::Result<T,E>::map_err
INFO 2024-07-05T08:47:30.617758198Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/result.rs:829:27
INFO 2024-07-05T08:47:30.617764678Z [resource.labels.containerName: meta] 7: risingwave_storage::hummock::sstable::sstable_object_id_manager::SstableObjectIdManager::map_next_sst_object_id::{{closure}}::{{closure}}
INFO 2024-07-05T08:47:30.617769518Z [resource.labels.containerName: meta] at ./risingwave/src/storage/src/hummock/sstable/sstable_object_id_manager.rs:109:41
INFO 2024-07-05T08:47:30.617773038Z [resource.labels.containerName: meta] 8: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
INFO 2024-07-05T08:47:30.617776398Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tracing-0.1.40/src/instrument.rs:321:9
INFO 2024-07-05T08:47:30.617780107Z [resource.labels.containerName: meta] 9: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
INFO 2024-07-05T08:47:30.617783447Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:328:17
INFO 2024-07-05T08:47:30.617786738Z [resource.labels.containerName: meta] 10: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
INFO 2024-07-05T08:47:30.617790058Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/loom/std/unsafe_cell.rs:16:9
INFO 2024-07-05T08:47:30.617793367Z [resource.labels.containerName: meta] 11: tokio::runtime::task::core::Core<T,S>::poll
INFO 2024-07-05T08:47:30.617796918Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:317:30
INFO 2024-07-05T08:47:30.617800287Z [resource.labels.containerName: meta] 12: tokio::runtime::task::harness::poll_future::{{closure}}
INFO 2024-07-05T08:47:30.617803587Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:485:19
INFO 2024-07-05T08:47:30.617806947Z [resource.labels.containerName: meta] 13: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
INFO 2024-07-05T08:47:30.617820757Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/panic/unwind_safe.rs:272:9
INFO 2024-07-05T08:47:30.617826477Z [resource.labels.containerName: meta] 14: std::panicking::try::do_call
INFO 2024-07-05T08:47:30.617832337Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:552:40
INFO 2024-07-05T08:47:30.617839087Z [resource.labels.containerName: meta] 15: std::panicking::try
INFO 2024-07-05T08:47:30.617844237Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:516:19
INFO 2024-07-05T08:47:30.617848877Z [resource.labels.containerName: meta] 16: std::panic::catch_unwind
INFO 2024-07-05T08:47:30.617853347Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panic.rs:146:14
INFO 2024-07-05T08:47:30.617857897Z [resource.labels.containerName: meta] 17: tokio::runtime::task::harness::poll_future
INFO 2024-07-05T08:47:30.617862977Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:473:18
INFO 2024-07-05T08:47:30.617868127Z [resource.labels.containerName: meta] 18: tokio::runtime::task::harness::Harness<T,S>::poll_inner
INFO 2024-07-05T08:47:30.617894437Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:208:27
INFO 2024-07-05T08:47:30.617912637Z [resource.labels.containerName: meta] 19: tokio::runtime::task::harness::Harness<T,S>::poll
INFO 2024-07-05T08:47:30.617918917Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:153:15
INFO 2024-07-05T08:47:30.617923747Z [resource.labels.containerName: meta] 20: tokio::runtime::task::raw::RawTask::poll
INFO 2024-07-05T08:47:30.617928297Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/raw.rs:201:18
INFO 2024-07-05T08:47:30.617933057Z [resource.labels.containerName: meta] 21: tokio::runtime::task::LocalNotified<S>::run
INFO 2024-07-05T08:47:30.617937637Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/mod.rs:427:9
INFO 2024-07-05T08:47:30.617942827Z [resource.labels.containerName: meta] 22: tokio::runtime::scheduler::multi_thread::worker::Context::run_task::{{closure}}
INFO 2024-07-05T08:47:30.617947887Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:576:18
INFO 2024-07-05T08:47:30.617952537Z [resource.labels.containerName: meta] 23: tokio::runtime::coop::with_budget
INFO 2024-07-05T08:47:30.617957057Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/coop.rs:107:5
INFO 2024-07-05T08:47:30.617962277Z [resource.labels.containerName: meta] 24: tokio::runtime::coop::budget
INFO 2024-07-05T08:47:30.617967107Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/coop.rs:73:5
INFO 2024-07-05T08:47:30.617972067Z [resource.labels.containerName: meta] 25: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
INFO 2024-07-05T08:47:30.617975397Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:575:9
INFO 2024-07-05T08:47:30.617978747Z [resource.labels.containerName: meta] 26: tokio::runtime::scheduler::multi_thread::worker::Context::run
INFO 2024-07-05T08:47:30.617983377Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:526:24
INFO 2024-07-05T08:47:30.617988777Z [resource.labels.containerName: meta] 27: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}::{{closure}}
INFO 2024-07-05T08:47:30.617993657Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:491:21
INFO 2024-07-05T08:47:30.617999187Z [resource.labels.containerName: meta] 28: tokio::runtime::context::scoped::Scoped<T>::set
INFO 2024-07-05T08:47:30.618004637Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context/scoped.rs:40:9
INFO 2024-07-05T08:47:30.618019067Z [resource.labels.containerName: meta] 29: tokio::runtime::context::set_scheduler::{{closure}}
INFO 2024-07-05T08:47:30.618023817Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context.rs:176:26
INFO 2024-07-05T08:47:30.618028277Z [resource.labels.containerName: meta] 30: std::thread::local::LocalKey<T>::try_with
INFO 2024-07-05T08:47:30.618032887Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/thread/local.rs:284:16
INFO 2024-07-05T08:47:30.618037457Z [resource.labels.containerName: meta] 31: std::thread::local::LocalKey<T>::with
INFO 2024-07-05T08:47:30.618042087Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/thread/local.rs:260:9
INFO 2024-07-05T08:47:30.618046557Z [resource.labels.containerName: meta] 32: tokio::runtime::context::set_scheduler
INFO 2024-07-05T08:47:30.618051107Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context.rs:176:17
INFO 2024-07-05T08:47:30.618055577Z [resource.labels.containerName: meta] 33: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}
INFO 2024-07-05T08:47:30.618060827Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:486:9
INFO 2024-07-05T08:47:30.618065417Z [resource.labels.containerName: meta] 34: tokio::runtime::context::runtime::enter_runtime
INFO 2024-07-05T08:47:30.618070377Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context/runtime.rs:65:16
INFO 2024-07-05T08:47:30.618075437Z [resource.labels.containerName: meta] 35: tokio::runtime::scheduler::multi_thread::worker::run
INFO 2024-07-05T08:47:30.618080337Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:478:5
INFO 2024-07-05T08:47:30.618085797Z [resource.labels.containerName: meta] 36: tokio::runtime::scheduler::multi_thread::worker::Launch::launch::{{closure}}
INFO 2024-07-05T08:47:30.618090877Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:447:45
INFO 2024-07-05T08:47:30.618095977Z [resource.labels.containerName: meta] 37: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
INFO 2024-07-05T08:47:30.618101437Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/task.rs:42:21
INFO 2024-07-05T08:47:30.618107007Z [resource.labels.containerName: meta] 38: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
INFO 2024-07-05T08:47:30.618112747Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tracing-0.1.40/src/instrument.rs:321:9
INFO 2024-07-05T08:47:30.618117917Z [resource.labels.containerName: meta] 39: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
INFO 2024-07-05T08:47:30.618122977Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:328:17
INFO 2024-07-05T08:47:30.618127647Z [resource.labels.containerName: meta] 40: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
INFO 2024-07-05T08:47:30.618142487Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/loom/std/unsafe_cell.rs:16:9
INFO 2024-07-05T08:47:30.618147537Z [resource.labels.containerName: meta] 41: tokio::runtime::task::core::Core<T,S>::poll
INFO 2024-07-05T08:47:30.618152107Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:317:30
INFO 2024-07-05T08:47:30.618156737Z [resource.labels.containerName: meta] 42: tokio::runtime::task::harness::poll_future::{{closure}}
INFO 2024-07-05T08:47:30.618161597Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:485:19
INFO 2024-07-05T08:47:30.618166837Z [resource.labels.containerName: meta] 43: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
INFO 2024-07-05T08:47:30.618172007Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/panic/unwind_safe.rs:272:9
INFO 2024-07-05T08:47:30.618177257Z [resource.labels.containerName: meta] 44: std::panicking::try::do_call
INFO 2024-07-05T08:47:30.618182747Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:552:40
INFO 2024-07-05T08:47:30.618196707Z [resource.labels.containerName: meta] 45: std::panicking::try
INFO 2024-07-05T08:47:30.618202487Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:516:19
INFO 2024-07-05T08:47:30.618207807Z [resource.labels.containerName: meta] 46: std::panic::catch_unwind
INFO 2024-07-05T08:47:30.618212867Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panic.rs:146:14
INFO 2024-07-05T08:47:30.618217507Z [resource.labels.containerName: meta] 47: tokio::runtime::task::harness::poll_future
INFO 2024-07-05T08:47:30.618221967Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:473:18
INFO 2024-07-05T08:47:30.618226627Z [resource.labels.containerName: meta] 48: tokio::runtime::task::harness::Harness<T,S>::poll_inner
INFO 2024-07-05T08:47:30.618231177Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:208:27
INFO 2024-07-05T08:47:30.618235627Z [resource.labels.containerName: meta] 49: tokio::runtime::task::harness::Harness<T,S>::poll
INFO 2024-07-05T08:47:30.618240197Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:153:15
INFO 2024-07-05T08:47:30.618244747Z [resource.labels.containerName: meta] 50: tokio::runtime::task::raw::RawTask::poll
INFO 2024-07-05T08:47:30.618249187Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/raw.rs:201:18
INFO 2024-07-05T08:47:30.618253717Z [resource.labels.containerName: meta] 51: tokio::runtime::task::UnownedTask<S>::run
INFO 2024-07-05T08:47:30.618258057Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/mod.rs:464:9
INFO 2024-07-05T08:47:30.618262737Z [resource.labels.containerName: meta] 52: tokio::runtime::blocking::pool::Task::run
INFO 2024-07-05T08:47:30.618267907Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/pool.rs:159:9
INFO 2024-07-05T08:47:30.618272967Z [resource.labels.containerName: meta] 53: tokio::runtime::blocking::pool::Inner::run
INFO 2024-07-05T08:47:30.618277947Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/pool.rs:513:17
INFO 2024-07-05T08:47:30.618283417Z [resource.labels.containerName: meta] 54: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
INFO 2024-07-05T08:47:30.618288147Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/pool.rs:471:13
INFO 2024-07-05T08:47:30.618293507Z [resource.labels.containerName: meta] 55: std::sys_common::backtrace::__rust_begin_short_backtrace
INFO 2024-07-05T08:47:30.618298807Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/sys_common/backtrace.rs:155:18
INFO 2024-07-05T08:47:30.618304187Z [resource.labels.containerName: meta] 56: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}}
INFO 2024-07-05T08:47:30.618309347Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/thread/mod.rs:528:17
INFO 2024-07-05T08:47:30.618313767Z [resource.labels.containerName: meta] 57: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
INFO 2024-07-05T08:47:30.618318257Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/panic/unwind_safe.rs:272:9
INFO 2024-07-05T08:47:30.618322697Z [resource.labels.containerName: meta] 58: std::panicking::try::do_call
INFO 2024-07-05T08:47:30.618327157Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:552:40
INFO 2024-07-05T08:47:30.618331587Z [resource.labels.containerName: meta] 59: std::panicking::try
INFO 2024-07-05T08:47:30.618336057Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:516:19
INFO 2024-07-05T08:47:30.618340487Z [resource.labels.containerName: meta] 60: std::panic::catch_unwind
INFO 2024-07-05T08:47:30.618344967Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panic.rs:146:14
INFO 2024-07-05T08:47:30.618349507Z [resource.labels.containerName: meta] 61: std::thread::Builder::spawn_unchecked_::{{closure}}
INFO 2024-07-05T08:47:30.618361347Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/thread/mod.rs:527:30
INFO 2024-07-05T08:47:30.618366287Z [resource.labels.containerName: meta] 62: core::ops::function::FnOnce::call_once{{vtable.shim}}
INFO 2024-07-05T08:47:30.618371307Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/ops/function.rs:250:5
INFO 2024-07-05T08:47:30.618376967Z [resource.labels.containerName: meta] 63: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
INFO 2024-07-05T08:47:30.618381907Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/alloc/src/boxed.rs:2020:9
INFO 2024-07-05T08:47:30.618387017Z [resource.labels.containerName: meta] 64: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
INFO 2024-07-05T08:47:30.618409617Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/alloc/src/boxed.rs:2020:9
INFO 2024-07-05T08:47:30.618415317Z [resource.labels.containerName: meta] 65: std::sys::pal::unix::thread::Thread::new::thread_start
INFO 2024-07-05T08:47:30.618420207Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/sys/pal/unix/thread.rs:108:17
INFO 2024-07-05T08:47:30.618425017Z [resource.labels.containerName: meta] 66: start_thread
INFO 2024-07-05T08:47:30.618429607Z [resource.labels.containerName: meta] at ./nptl/pthread_create.c:447:8
INFO 2024-07-05T08:47:30.618434077Z [resource.labels.containerName: meta] 67: __GI___clone3
INFO 2024-07-05T08:47:30.618438637Z [resource.labels.containerName: meta] at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78


swapkh91 commented Jul 5, 2024

@chenzl25 I changed the image in my YAML and re-applied. Previously I was on 1.9.1; I changed it to ghcr.io/risingwavelabs/risingwave:nightly-20240704. Now I'm getting the below error in meta:

2024-07-05T08:47:30.617195198Z  WARN risingwave_meta::barrier::rpc: get error from response stream node=WorkerNode { id: 5, r#type: ComputeNode, host: Some(HostAddress { host: "risingwave-etcd-gcs-compute-0.risingwave-etcd-gcs-compute", port: 5688 }), state: Running, parallel_units: [ParallelUnit { id: 8, worker_node_id: 5 }, ParallelUnit { id: 9, worker_node_id: 5 }, ParallelUnit { id: 10, worker_node_id: 5 }, ParallelUnit { id: 11, worker_node_id: 5 }], property: Some(Property { is_streaming: true, is_serving: true, is_unschedulable: false }), transactional_id: Some(3), resource: None, started_at: None } err=gRPC request to stream service failed: Internal error: Storage error: Hummock error: Other error: failed sync task: Meta error: gRPC request to meta service failed: The service is currently unavailable: transport error
INFO 2024-07-05T08:47:30.617679258Z [resource.labels.containerName: meta] Backtrace:
INFO 2024-07-05T08:47:30.617684138Z [resource.labels.containerName: meta] 0: <thiserror_ext::backtrace::MaybeBacktrace as thiserror_ext::backtrace::WithBacktrace>::capture
INFO 2024-07-05T08:47:30.617688638Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/thiserror-ext-0.1.2/src/backtrace.rs:30:18
INFO 2024-07-05T08:47:30.617699078Z [resource.labels.containerName: meta] 1: thiserror_ext::ptr::ErrorBox<T,B>::new
INFO 2024-07-05T08:47:30.617702438Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/thiserror-ext-0.1.2/src/ptr.rs:40:33
INFO 2024-07-05T08:47:30.617707627Z [resource.labels.containerName: meta] 2: <risingwave_storage::hummock::error::HummockError as core::convert::From<E>>::from
INFO 2024-07-05T08:47:30.617714198Z [resource.labels.containerName: meta] at ./risingwave/src/storage/src/hummock/error.rs:22:45
INFO 2024-07-05T08:47:30.617719347Z [resource.labels.containerName: meta] 3: <T as core::convert::Into<U>>::into
INFO 2024-07-05T08:47:30.617725227Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/convert/mod.rs:759:9
INFO 2024-07-05T08:47:30.617731387Z [resource.labels.containerName: meta] 4: risingwave_storage::hummock::error::HummockError::meta_error at ./risingwave/src/storage/src/hummock/error.rs:101:57
INFO 2024-07-05T08:47:30.617742818Z [resource.labels.containerName: meta] 5: core::ops::function::FnOnce::call_once
INFO 2024-07-05T08:47:30.617748278Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/ops/function.rs:250:5
INFO 2024-07-05T08:47:30.617753038Z [resource.labels.containerName: meta] 6: core::result::Result<T,E>::map_err
INFO 2024-07-05T08:47:30.617758198Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/result.rs:829:27
INFO 2024-07-05T08:47:30.617764678Z [resource.labels.containerName: meta] 7: risingwave_storage::hummock::sstable::sstable_object_id_manager::SstableObjectIdManager::map_next_sst_object_id::{{closure}}::{{closure}}
INFO 2024-07-05T08:47:30.617769518Z [resource.labels.containerName: meta] at ./risingwave/src/storage/src/hummock/sstable/sstable_object_id_manager.rs:109:41
INFO 2024-07-05T08:47:30.617773038Z [resource.labels.containerName: meta] 8: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
INFO 2024-07-05T08:47:30.617776398Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tracing-0.1.40/src/instrument.rs:321:9
INFO 2024-07-05T08:47:30.617780107Z [resource.labels.containerName: meta] 9: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
INFO 2024-07-05T08:47:30.617783447Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:328:17
INFO 2024-07-05T08:47:30.617786738Z [resource.labels.containerName: meta] 10: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
INFO 2024-07-05T08:47:30.617790058Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/loom/std/unsafe_cell.rs:16:9
INFO 2024-07-05T08:47:30.617793367Z [resource.labels.containerName: meta] 11: tokio::runtime::task::core::Core<T,S>::poll
INFO 2024-07-05T08:47:30.617796918Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:317:30
INFO 2024-07-05T08:47:30.617800287Z [resource.labels.containerName: meta] 12: tokio::runtime::task::harness::poll_future::{{closure}}
INFO 2024-07-05T08:47:30.617803587Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:485:19
INFO 2024-07-05T08:47:30.617806947Z [resource.labels.containerName: meta] 13: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
INFO 2024-07-05T08:47:30.617820757Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/panic/unwind_safe.rs:272:9
INFO 2024-07-05T08:47:30.617826477Z [resource.labels.containerName: meta] 14: std::panicking::try::do_call
INFO 2024-07-05T08:47:30.617832337Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:552:40
INFO 2024-07-05T08:47:30.617839087Z [resource.labels.containerName: meta] 15: std::panicking::try
INFO 2024-07-05T08:47:30.617844237Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:516:19
INFO 2024-07-05T08:47:30.617848877Z [resource.labels.containerName: meta] 16: std::panic::catch_unwind
INFO 2024-07-05T08:47:30.617853347Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panic.rs:146:14
INFO 2024-07-05T08:47:30.617857897Z [resource.labels.containerName: meta] 17: tokio::runtime::task::harness::poll_future
INFO 2024-07-05T08:47:30.617862977Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:473:18
INFO 2024-07-05T08:47:30.617868127Z [resource.labels.containerName: meta] 18: tokio::runtime::task::harness::Harness<T,S>::poll_inner
INFO 2024-07-05T08:47:30.617894437Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:208:27
INFO 2024-07-05T08:47:30.617912637Z [resource.labels.containerName: meta] 19: tokio::runtime::task::harness::Harness<T,S>::poll
INFO 2024-07-05T08:47:30.617918917Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:153:15
INFO 2024-07-05T08:47:30.617923747Z [resource.labels.containerName: meta] 20: tokio::runtime::task::raw::RawTask::poll
INFO 2024-07-05T08:47:30.617928297Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/raw.rs:201:18
INFO 2024-07-05T08:47:30.617933057Z [resource.labels.containerName: meta] 21: tokio::runtime::task::LocalNotified<S>::run
INFO 2024-07-05T08:47:30.617937637Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/mod.rs:427:9
INFO 2024-07-05T08:47:30.617942827Z [resource.labels.containerName: meta] 22: tokio::runtime::scheduler::multi_thread::worker::Context::run_task::{{closure}}
INFO 2024-07-05T08:47:30.617947887Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:576:18
INFO 2024-07-05T08:47:30.617952537Z [resource.labels.containerName: meta] 23: tokio::runtime::coop::with_budget
INFO 2024-07-05T08:47:30.617957057Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/coop.rs:107:5
INFO 2024-07-05T08:47:30.617962277Z [resource.labels.containerName: meta] 24: tokio::runtime::coop::budget
INFO 2024-07-05T08:47:30.617967107Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/coop.rs:73:5
INFO 2024-07-05T08:47:30.617972067Z [resource.labels.containerName: meta] 25: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
INFO 2024-07-05T08:47:30.617975397Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:575:9
INFO 2024-07-05T08:47:30.617978747Z [resource.labels.containerName: meta] 26: tokio::runtime::scheduler::multi_thread::worker::Context::run
INFO 2024-07-05T08:47:30.617983377Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:526:24
INFO 2024-07-05T08:47:30.617988777Z [resource.labels.containerName: meta] 27: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}::{{closure}}
INFO 2024-07-05T08:47:30.617993657Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:491:21
INFO 2024-07-05T08:47:30.617999187Z [resource.labels.containerName: meta] 28: tokio::runtime::context::scoped::Scoped<T>::set
INFO 2024-07-05T08:47:30.618004637Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context/scoped.rs:40:9
INFO 2024-07-05T08:47:30.618019067Z [resource.labels.containerName: meta] 29: tokio::runtime::context::set_scheduler::{{closure}}
INFO 2024-07-05T08:47:30.618023817Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context.rs:176:26
INFO 2024-07-05T08:47:30.618028277Z [resource.labels.containerName: meta] 30: std::thread::local::LocalKey<T>::try_with
INFO 2024-07-05T08:47:30.618032887Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/thread/local.rs:284:16
INFO 2024-07-05T08:47:30.618037457Z [resource.labels.containerName: meta] 31: std::thread::local::LocalKey<T>::with
INFO 2024-07-05T08:47:30.618042087Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/thread/local.rs:260:9
INFO 2024-07-05T08:47:30.618046557Z [resource.labels.containerName: meta] 32: tokio::runtime::context::set_scheduler
INFO 2024-07-05T08:47:30.618051107Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context.rs:176:17
INFO 2024-07-05T08:47:30.618055577Z [resource.labels.containerName: meta] 33: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}
INFO 2024-07-05T08:47:30.618060827Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:486:9
INFO 2024-07-05T08:47:30.618065417Z [resource.labels.containerName: meta] 34: tokio::runtime::context::runtime::enter_runtime
INFO 2024-07-05T08:47:30.618070377Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context/runtime.rs:65:16
INFO 2024-07-05T08:47:30.618075437Z [resource.labels.containerName: meta] 35: tokio::runtime::scheduler::multi_thread::worker::run
INFO 2024-07-05T08:47:30.618080337Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:478:5
INFO 2024-07-05T08:47:30.618085797Z [resource.labels.containerName: meta] 36: tokio::runtime::scheduler::multi_thread::worker::Launch::launch::{{closure}}
INFO 2024-07-05T08:47:30.618090877Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:447:45
INFO 2024-07-05T08:47:30.618095977Z [resource.labels.containerName: meta] 37: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
INFO 2024-07-05T08:47:30.618101437Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/task.rs:42:21
INFO 2024-07-05T08:47:30.618107007Z [resource.labels.containerName: meta] 38: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
INFO 2024-07-05T08:47:30.618112747Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tracing-0.1.40/src/instrument.rs:321:9
INFO 2024-07-05T08:47:30.618117917Z [resource.labels.containerName: meta] 39: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
INFO 2024-07-05T08:47:30.618122977Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:328:17
INFO 2024-07-05T08:47:30.618127647Z [resource.labels.containerName: meta] 40: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
INFO 2024-07-05T08:47:30.618142487Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/loom/std/unsafe_cell.rs:16:9
INFO 2024-07-05T08:47:30.618147537Z [resource.labels.containerName: meta] 41: tokio::runtime::task::core::Core<T,S>::poll
INFO 2024-07-05T08:47:30.618152107Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:317:30
INFO 2024-07-05T08:47:30.618156737Z [resource.labels.containerName: meta] 42: tokio::runtime::task::harness::poll_future::{{closure}}
INFO 2024-07-05T08:47:30.618161597Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:485:19
INFO 2024-07-05T08:47:30.618166837Z [resource.labels.containerName: meta] 43: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
INFO 2024-07-05T08:47:30.618172007Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/panic/unwind_safe.rs:272:9
INFO 2024-07-05T08:47:30.618177257Z [resource.labels.containerName: meta] 44: std::panicking::try::do_call
INFO 2024-07-05T08:47:30.618182747Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:552:40
INFO 2024-07-05T08:47:30.618196707Z [resource.labels.containerName: meta] 45: std::panicking::try
INFO 2024-07-05T08:47:30.618202487Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:516:19
INFO 2024-07-05T08:47:30.618207807Z [resource.labels.containerName: meta] 46: std::panic::catch_unwind
INFO 2024-07-05T08:47:30.618212867Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panic.rs:146:14
INFO 2024-07-05T08:47:30.618217507Z [resource.labels.containerName: meta] 47: tokio::runtime::task::harness::poll_future
INFO 2024-07-05T08:47:30.618221967Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:473:18
INFO 2024-07-05T08:47:30.618226627Z [resource.labels.containerName: meta] 48: tokio::runtime::task::harness::Harness<T,S>::poll_inner
INFO 2024-07-05T08:47:30.618231177Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:208:27
INFO 2024-07-05T08:47:30.618235627Z [resource.labels.containerName: meta] 49: tokio::runtime::task::harness::Harness<T,S>::poll
INFO 2024-07-05T08:47:30.618240197Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:153:15
INFO 2024-07-05T08:47:30.618244747Z [resource.labels.containerName: meta] 50: tokio::runtime::task::raw::RawTask::poll
INFO 2024-07-05T08:47:30.618249187Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/raw.rs:201:18
INFO 2024-07-05T08:47:30.618253717Z [resource.labels.containerName: meta] 51: tokio::runtime::task::UnownedTask<S>::run
INFO 2024-07-05T08:47:30.618258057Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/mod.rs:464:9
INFO 2024-07-05T08:47:30.618262737Z [resource.labels.containerName: meta] 52: tokio::runtime::blocking::pool::Task::run
INFO 2024-07-05T08:47:30.618267907Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/pool.rs:159:9
INFO 2024-07-05T08:47:30.618272967Z [resource.labels.containerName: meta] 53: tokio::runtime::blocking::pool::Inner::run
INFO 2024-07-05T08:47:30.618277947Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/pool.rs:513:17
INFO 2024-07-05T08:47:30.618283417Z [resource.labels.containerName: meta] 54: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
INFO 2024-07-05T08:47:30.618288147Z [resource.labels.containerName: meta] at ./root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/pool.rs:471:13
INFO 2024-07-05T08:47:30.618293507Z [resource.labels.containerName: meta] 55: std::sys_common::backtrace::__rust_begin_short_backtrace
INFO 2024-07-05T08:47:30.618298807Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/sys_common/backtrace.rs:155:18
INFO 2024-07-05T08:47:30.618304187Z [resource.labels.containerName: meta] 56: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}}
INFO 2024-07-05T08:47:30.618309347Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/thread/mod.rs:528:17
INFO 2024-07-05T08:47:30.618313767Z [resource.labels.containerName: meta] 57: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
INFO 2024-07-05T08:47:30.618318257Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/panic/unwind_safe.rs:272:9
INFO 2024-07-05T08:47:30.618322697Z [resource.labels.containerName: meta] 58: std::panicking::try::do_call
INFO 2024-07-05T08:47:30.618327157Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:552:40
INFO 2024-07-05T08:47:30.618331587Z [resource.labels.containerName: meta] 59: std::panicking::try
INFO 2024-07-05T08:47:30.618336057Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panicking.rs:516:19
INFO 2024-07-05T08:47:30.618340487Z [resource.labels.containerName: meta] 60: std::panic::catch_unwind
INFO 2024-07-05T08:47:30.618344967Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/panic.rs:146:14
INFO 2024-07-05T08:47:30.618349507Z [resource.labels.containerName: meta] 61: std::thread::Builder::spawn_unchecked_::{{closure}}
INFO 2024-07-05T08:47:30.618361347Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/thread/mod.rs:527:30
INFO 2024-07-05T08:47:30.618366287Z [resource.labels.containerName: meta] 62: core::ops::function::FnOnce::call_once{{vtable.shim}}
INFO 2024-07-05T08:47:30.618371307Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/core/src/ops/function.rs:250:5
INFO 2024-07-05T08:47:30.618376967Z [resource.labels.containerName: meta] 63: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
INFO 2024-07-05T08:47:30.618381907Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/alloc/src/boxed.rs:2020:9
INFO 2024-07-05T08:47:30.618387017Z [resource.labels.containerName: meta] 64: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
INFO 2024-07-05T08:47:30.618409617Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/alloc/src/boxed.rs:2020:9
INFO 2024-07-05T08:47:30.618415317Z [resource.labels.containerName: meta] 65: std::sys::pal::unix::thread::Thread::new::thread_start
INFO 2024-07-05T08:47:30.618420207Z [resource.labels.containerName: meta] at ./rustc/4a0cc881dcc4d800f10672747f61a94377ff6662/library/std/src/sys/pal/unix/thread.rs:108:17
INFO 2024-07-05T08:47:30.618425017Z [resource.labels.containerName: meta] 66: start_thread
INFO 2024-07-05T08:47:30.618429607Z [resource.labels.containerName: meta] at ./nptl/pthread_create.c:447:8
INFO 2024-07-05T08:47:30.618434077Z [resource.labels.containerName: meta] 67: __GI___clone3
INFO 2024-07-05T08:47:30.618438637Z [resource.labels.containerName: meta] at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78

@chenzl25 if I delete the deployment and redeploy from scratch, the above error does not occur.

Contributor

chenzl25 commented Jul 5, 2024

Thanks for reporting, @swapkh91. The nightly image contains some in-development PRs with breaking changes, so that error is expected. What about the Iceberg sink with the Hive catalog? Has it been resolved in your environment?


swapkh91 commented Jul 5, 2024

Thanks for reporting, @swapkh91. The nightly image contains some in-development PRs with breaking changes, so that error is expected. What about the Iceberg sink with the Hive catalog? Has it been resolved in your environment?

@chenzl25 getting the same error

ERROR handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip>:<port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip>:<port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: risingwave_connector_node: HMSHandler Fatal error: MetaException(message:The class loader argument to this method cannot be null.)
INFO 2024-07-05T09:08:46.546597002Z [resource.labels.containerName: frontend] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:208)
INFO 2024-07-05T09:08:46.546601272Z [resource.labels.containerName: frontend] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
INFO 2024-07-05T09:08:46.546604722Z [resource.labels.containerName: frontend] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
INFO 2024-07-05T09:08:46.546608012Z [resource.labels.containerName: frontend] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
INFO 2024-07-05T09:08:46.546611382Z [resource.labels.containerName: frontend] at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
INFO 2024-07-05T09:08:46.546614642Z [resource.labels.containerName: frontend] at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
INFO 2024-07-05T09:08:46.546617942Z [resource.labels.containerName: frontend] at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
INFO 2024-07-05T09:08:46.546621612Z [resource.labels.containerName: frontend] at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
INFO 2024-07-05T09:08:46.546625082Z [resource.labels.containerName: frontend] at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
INFO 2024-07-05T09:08:46.546628492Z [resource.labels.containerName: frontend] at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
INFO 2024-07-05T09:08:46.546631912Z [resource.labels.containerName: frontend] at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
INFO 2024-07-05T09:08:46.546635192Z [resource.labels.containerName: frontend] at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
INFO 2024-07-05T09:08:46.546638542Z [resource.labels.containerName: frontend] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
INFO 2024-07-05T09:08:46.546643702Z [resource.labels.containerName: frontend] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
INFO 2024-07-05T09:08:46.546647222Z [resource.labels.containerName: frontend] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
INFO 2024-07-05T09:08:46.546650582Z [resource.labels.containerName: frontend] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:112)
INFO 2024-07-05T09:08:46.546653892Z [resource.labels.containerName: frontend] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO 2024-07-05T09:08:46.546657282Z [resource.labels.containerName: frontend] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
INFO 2024-07-05T09:08:46.546660722Z [resource.labels.containerName: frontend] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO 2024-07-05T09:08:46.546664212Z [resource.labels.containerName: frontend] at java.base/java.lang.reflect.Method.invoke(Method.java:568)
INFO 2024-07-05T09:08:46.546667532Z [resource.labels.containerName: frontend] at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:61)
INFO 2024-07-05T09:08:46.546670812Z [resource.labels.containerName: frontend] at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:73)
INFO 2024-07-05T09:08:46.546674202Z [resource.labels.containerName: frontend] at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:186)
INFO 2024-07-05T09:08:46.546677562Z [resource.labels.containerName: frontend] at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:63)
INFO 2024-07-05T09:08:46.546680942Z [resource.labels.containerName: frontend] at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:34)
INFO 2024-07-05T09:08:46.546684232Z [resource.labels.containerName: frontend] at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125)
INFO 2024-07-05T09:08:46.546693122Z [resource.labels.containerName: frontend] at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56)
INFO 2024-07-05T09:08:46.546696632Z [resource.labels.containerName: frontend] at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
INFO 2024-07-05T09:08:46.546700032Z [resource.labels.containerName: frontend] at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:122)
INFO 2024-07-05T09:08:46.546703462Z [resource.labels.containerName: frontend] at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:146)
INFO 2024-07-05T09:08:46.546712162Z [resource.labels.containerName: frontend] at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:97)
INFO 2024-07-05T09:08:46.546715532Z [resource.labels.containerName: frontend] at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:80)
INFO 2024-07-05T09:08:46.546718952Z [resource.labels.containerName: frontend] at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:49)
INFO 2024-07-05T09:08:46.546722312Z [resource.labels.containerName: frontend] at org.apache.iceberg.rest.CatalogHandlers.loadTable(CatalogHandlers.java:269)
INFO 2024-07-05T09:08:46.546725612Z [resource.labels.containerName: frontend] at com.risingwave.connector.catalog.JniCatalogWrapper.loadTable(JniCatalogWrapper.java:47)
Caused by: javax.jdo.JDOFatalUserException: The class loader argument to this method cannot be null.
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:799)
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:651)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:694)
	at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:484)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:421)
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:376)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:159)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:126)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:59)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:720)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:698)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:692)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:775)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	... 34 more
  thread="Thread-3" class="org.apache.hadoop.hive.metastore.RetryingHMSHandler"
Exception in thread "Thread-3" org.apache.iceberg.hive.RuntimeMetaException: Failed to connect to Hive Metastore
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:84)
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:34)
	at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125)
	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56)
	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
	at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:122)
	at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:146)
	at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:97)
	at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:80)
	at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:49)
	at org.apache.iceberg.rest.CatalogHandlers.loadTable(CatalogHandlers.java:269)
	at com.risingwave.connector.catalog.JniCatalogWrapper.loadTable(JniCatalogWrapper.java:47)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:86)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:112)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:61)
	at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:73)
	at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:186)
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:63)
	... 11 more
Caused by: java.lang.reflect.InvocationTargetException
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
	... 23 more
Caused by: MetaException(message:The class loader argument to this method cannot be null.)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
	... 29 more
Caused by: MetaException(message:The class loader argument to this method cannot be null.)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:208)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
	... 32 more
Caused by: javax.jdo.JDOFatalUserException: The class loader argument to this method cannot be null.
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:799)
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:651)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:694)
	at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:484)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:421)
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:376)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:159)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:126)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:59)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:720)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:698)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:692)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:775)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	... 34 more

@swapkh91 commented Jul 5, 2024

@chenzl25
I have a StarRocks external catalog, which works fine:

CREATE EXTERNAL CATALOG `iceberg_hive_qa`
PROPERTIES ("aws.s3.access_key"  =  "<>",
"aws.s3.secret_key"  =  "<>",
"iceberg.catalog.hive.metastore.uris"  =  "thrift://ip:port",
"aws.s3.endpoint"  =  "http://ip:port",
"aws.s3.enable_path_style_access"  =  "true",
"type"  =  "iceberg",
"iceberg.catalog.type"  =  "hive"
)

This query returns results on StarRocks:

select * from iceberg_hive_qa.segmentdb.segment_impression_data_rw

But this gives the above error on RisingWave; I think all the fields are correct:

CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv
WITH (
  connector='iceberg',
  type='append-only',
  force_append_only='true',
  warehouse.path='s3a://hive-iceberg/segment_db',
  s3.endpoint='http://ip:port',
  s3.access.key='<>',
  s3.secret.key='<>',
  catalog.name='iceberg_hive_qa',
  catalog.type='hive',
  catalog.url='thrift://ip:port',
  database.name = 'segmentdb',
  table.name = 'segment_impression_data_rw'
);

@chenzl25 (Contributor) commented Jul 5, 2024

@swapkh91 The related PR was merged yesterday, but it was so close to the image-building time that I'm not 100% sure it made it into the image. We can use the image nightly-20240705 tomorrow once it is built.

Date:   Thu Jul 4 19:59:20 2024 +0800

    fix(iceberg): fix jni context class loader (#17478)
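For context on why this fix helps: threads attached to the JVM from native code via JNI start with a null context class loader, and Hive's `JDOHelper.getPersistenceManagerFactory` rejects a null loader, which is exactly the `JDOFatalUserException` in the trace above. A minimal sketch of the pattern (hypothetical illustration, not the actual RisingWave patch — the class and method names here are invented):

```java
import java.util.concurrent.Callable;

// Sketch: ensure a non-null context class loader is set before calling into
// libraries (like JDO/Hive) that read Thread.currentThread().getContextClassLoader().
public class ContextClassLoaderFix {
    public static <T> T callWithClassLoader(Callable<T> task) throws Exception {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        if (previous == null) {
            // JNI-attached threads have no context class loader; fall back to
            // the loader that loaded this class.
            current.setContextClassLoader(ContextClassLoaderFix.class.getClassLoader());
        }
        try {
            return task.call();
        } finally {
            // Restore whatever was there before (possibly null).
            current.setContextClassLoader(previous);
        }
    }

    public static void main(String[] args) throws Exception {
        // Inside the wrapper, the context class loader is guaranteed non-null.
        String result = callWithClassLoader(() ->
                Thread.currentThread().getContextClassLoader() == null ? "null" : "set");
        System.out.println(result);
    }
}
```

The same idea applies to any JNI entry point that hands control to Hive metastore code.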

@swapkh91 commented Jul 9, 2024

@chenzl25 Not solved yet. I used the image
ghcr.io/risingwavelabs/risingwave:nightly-20240708
and noticed a few logs.

When I execute select * from public.segment_impression_event_mv; on the MV
I get results, but I still see this warning:

WARN handle_query{mode="extended query execute" session_id=2 sql=SELECT * FROM public.segment_impression_event_mv, params = Some([])}:distributed_execute{query_id="dac7f11c-79f3-4ee2-a272-8a053bf1a945" epoch=BatchQueryEpoch { epoch: Some(Committed(6769280096731136)) }}:stage{otel.name="Stage dac7f11c-79f3-4ee2-a272-8a053bf1a945-0" query_id="dac7f11c-79f3-4ee2-a272-8a053bf1a945" stage_id=0}: risingwave_frontend::scheduler::distributed::stage: Root executor has been dropped before receive any events so the send is failed

Now when I execute create sink command, these are the logs

INFO handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip:port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: icelake::catalog: Loading base catalog config from configs: {"iceberg.catalog.type": "hive", "iceberg.table.io.secret_access_key": "", "iceberg.table.io.bucket": "hive-iceberg", "iceberg.table.io.access_key_id": "", "iceberg.table.io.s3.endpoint": "http://<ip:port>", "iceberg.table.io.s3.access-key-id": "", "iceberg.table.io.disable_config_load": "true", "iceberg.table.io.endpoint": "http://<ip:port>", "iceberg.table.io.s3.secret-access-key": "", "iceberg.catalog.name": "iceberg_hive_qa"}

INFO handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip:port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: icelake::catalog: Parsed base catalog config: BaseCatalogConfig { name: "iceberg_hive_qa", table_io_configs: {"secret_access_key": "", "s3.secret-access-key": "", "disable_config_load": "true", "bucket": "hive-iceberg", "s3.endpoint": "http://<ip:port>", "endpoint": "http://<ip:port>", "s3.access-key-id": "", "access_key_id": ""}, table_config: TableConfig { parquet_writer: ParquetWriterConfig { enable_bloom_filter: false, created_by: None, compression: SNAPPY, max_row_group_size: 1048576, write_batch_size: 1024, data_page_size: 1048576 }, rolling_writer: RollingWriterConfig { rows_per_file: 1000, target_file_size_in_bytes: 1048576 }, sorted_delete_position_writer: SortedDeletePositionWriterConfig { max_record_num: 2000 } } }

WARN handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip:port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: risingwave_connector_node: No Hadoop Configuration was set, using the default environment Configuration thread="Thread-87" class="org.apache.iceberg.hive.HiveCatalog"

INFO handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip:port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: risingwave_connector_node: Loading custom FileIO implementation: org.apache.iceberg.aws.s3.S3FileIO thread="Thread-87" class="org.apache.iceberg.CatalogUtil"

INFO handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip:port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: risingwave_connector_node: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore thread="Thread-88" class="org.apache.hadoop.hive.metastore.HiveMetaStore"

WARN handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip:port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: risingwave_connector_node: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored thread="Thread-88" class="org.apache.hadoop.hive.metastore.ObjectStore"

INFO handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip:port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: risingwave_connector_node: ObjectStore, initialize called thread="Thread-88" class="org.apache.hadoop.hive.metastore.ObjectStore"

ERROR handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip:port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: risingwave_connector_node: Error loading PartitionExpressionProxy: MetaException(message:org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore class not found)

ERROR handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip:port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: risingwave_connector_node: java.lang.RuntimeException: Error loading PartitionExpressionProxy: org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore class not found
	at org.apache.hadoop.hive.metastore.ObjectStore.createExpressionProxy(ObjectStore.java:542)
	at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:495)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:421)
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:376)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:159)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:126)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:59)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:720)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:698)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:692)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:769)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:112)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:61)
	at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:73)
	at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:186)
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:63)
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:34)
	at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125)
	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56)
	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
	at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:122)
	at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:146)
	at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:97)
	at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:80)
	at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:49)
	at org.apache.iceberg.rest.CatalogHandlers.loadTable(CatalogHandlers.java:269)
	at com.risingwave.connector.catalog.JniCatalogWrapper.loadTable(JniCatalogWrapper.java:47)

WARN handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip:port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: jni::wrapper::objects::global_ref: Dropping a GlobalRef in a detached thread. Fix your code if this message appears frequently (see the GlobalRef docs)

ERROR handle_query{mode="extended query execute" session_id=2 sql=CREATE SINK public.segment_impression_data FROM public.segment_impression_event_mv WITH (connector = 'iceberg', type = 'append-only', force_append_only = 'true', warehouse.path = 's3a://hive-iceberg/segment_db', s3.endpoint = 'http://<ip:port>', s3.access.key = [REDACTED], s3.secret.key = [REDACTED], catalog.name = 'iceberg_hive_qa', catalog.type = 'hive', catalog.url = 'thrift://<ip:port>', database.name = 'segmentdb', table.name = 'segment_impression_data_rw')}: pgwire::pg_protocol: error when process message error=Failed to execute the statement: connector error: Iceberg error: Unexpected => Failed to load iceberg table., source: Failed to load iceberg table: segmentdb.segment_impression_data_rw: Java exception was thrown

@chenzl25 (Contributor) commented:
I googled this error and found that RisingWave's jar dependencies lack hive-exec. Let me add it.

java.lang.RuntimeException: Error loading PartitionExpressionProxy: org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore class not found
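The missing class `org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore` ships in the hive-exec artifact. As a rough illustration of the kind of dependency change involved (the version and build setup here are illustrative, not necessarily what the PR does), in a Gradle build it would look like:

```kotlin
dependencies {
    // Provides org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore,
    // which the Hive metastore's ObjectStore loads reflectively at init time.
    implementation("org.apache.hive:hive-exec:3.1.3")
}
```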

@fuyufjh fuyufjh modified the milestones: release-1.10, release-1.11 Jul 10, 2024
@chenzl25 (Contributor) commented:
Hi @swapkh91, sorry for the late reply. I built a new image ghcr.io/risingwavelabs/risingwave:git-064fefe26e5c36edbdaa07d8622753887ed725f0 based on PR #17718. Could you please try it and see whether it works for your case?

@chenzl25 chenzl25 removed this from the release-2.0 milestone Aug 19, 2024
@chenzl25 chenzl25 linked a pull request Aug 19, 2024 that will close this issue