diff --git a/RELEASE-NOTES.md b/RELEASE-NOTES.md index 8620c4dccfbabf..dd1830eaa87c41 100644 --- a/RELEASE-NOTES.md +++ b/RELEASE-NOTES.md @@ -19,6 +19,7 @@ 1. Proxy: Add query parameters and check for mysql kill processId - [#33274](https://github.com/apache/shardingsphere/pull/33274) 1. Agent: Simplify the use of Agent's Docker Image - [#33356](https://github.com/apache/shardingsphere/pull/33356) 1. Build: Avoid using `-proc:full` when compiling ShardingSphere with OpenJDK23 - [#33681](https://github.com/apache/shardingsphere/pull/33681) +1. Doc: Add documentation for HiveServer2 support - [#33717](https://github.com/apache/shardingsphere/pull/33717) ### Bug Fixes diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.cn.md b/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.cn.md index 661c116ab3fdb3..d4f2a24d928af4 100644 --- a/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.cn.md +++ b/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.cn.md @@ -289,86 +289,9 @@ Caused by: java.io.UnsupportedEncodingException: Codepage Cp1252 is not supporte ClickHouse 不支持 ShardingSphere 集成级别的本地事务,XA 事务和 Seata AT 模式事务,更多讨论位于 https://github.com/ClickHouse/clickhouse-docs/issues/2300 。 -7. 
当需要通过 ShardingSphere JDBC 使用 Hive 方言时,受 https://issues.apache.org/jira/browse/HIVE-28445 影响, -用户不应该使用 `classifier` 为 `standalone` 的 `org.apache.hive:hive-jdbc:4.0.1`,以避免依赖冲突。 -可能的配置例子如下, - -```xml - - - - org.apache.shardingsphere - shardingsphere-jdbc - ${shardingsphere.version} - - - org.apache.shardingsphere - shardingsphere-infra-database-hive - ${shardingsphere.version} - - - org.apache.shardingsphere - shardingsphere-parser-sql-hive - ${shardingsphere.version} - - - org.apache.hive - hive-jdbc - 4.0.1 - - - org.apache.hive - hive-service - 4.0.1 - - - org.apache.hadoop - hadoop-client-api - 3.3.6 - - - -``` - -这会导致大量的依赖冲突。 -如果用户不希望手动解决潜在的数千行的依赖冲突,可以使用 HiveServer2 JDBC Driver 的 `Thin JAR` 的第三方构建。 -可能的配置例子如下, - -```xml - - - - org.apache.shardingsphere - shardingsphere-jdbc - ${shardingsphere.version} - - - org.apache.shardingsphere - shardingsphere-infra-database-hive - ${shardingsphere.version} - - - org.apache.shardingsphere - shardingsphere-parser-sql-hive - ${shardingsphere.version} - - - io.github.linghengqian - hive-server2-jdbc-driver-thin - 1.5.0 - - - com.fasterxml.woodstox - woodstox-core - - - - - -``` - -受 https://github.com/grpc/grpc-java/issues/10601 影响,用户如果在项目中引入了 `org.apache.hive:hive-jdbc`, +7. 
受 https://github.com/grpc/grpc-java/issues/10601 影响,用户如果在项目中引入了 `org.apache.hive:hive-jdbc`, 则需要在项目的 classpath 的 `META-INF/native-image/io.grpc/grpc-netty-shaded` 文件夹下创建包含如下内容的文件 `native-image.properties`, + ```properties Args=--initialize-at-run-time=\ io.grpc.netty.shaded.io.netty.channel.ChannelHandlerMask,\ @@ -400,55 +323,6 @@ Args=--initialize-at-run-time=\ io.grpc.netty.shaded.io.netty.util.AttributeKey ``` -为了能够使用 `delete` 等 DML SQL 语句,当连接到 HiveServer2 时, -用户应当考虑在 ShardingSphere JDBC 中仅使用支持 ACID 的表。`apache/hive` 提供了多种事务解决方案。 - -第1种选择是使用 ACID 表,可能的建表流程如下。 -由于其过时的基于目录的表格式,用户可能不得不在 DML 语句执行前后进行等待,以让 HiveServer2 完成低效的 DML 操作。 - -```sql -set metastore.compactor.initiator.on=true; -set metastore.compactor.cleaner.on=true; -set metastore.compactor.worker.threads=5; - -set hive.support.concurrency=true; -set hive.exec.dynamic.partition.mode=nonstrict; -set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; - -CREATE TABLE IF NOT EXISTS t_order -( - order_id BIGINT, - order_type INT, - user_id INT NOT NULL, - address_id BIGINT NOT NULL, - status VARCHAR(50), - PRIMARY KEY (order_id) disable novalidate -) CLUSTERED BY (order_id) INTO 2 BUCKETS STORED AS ORC TBLPROPERTIES ('transactional' = 'true'); -``` - -第2种选择是使用 Iceberg 表,可能的建表流程如下。 -Apache Iceberg 表格式有望在未来几年取代传统的 Hive 表格式, -参考 https://blog.cloudera.com/from-hive-tables-to-iceberg-tables-hassle-free/ 。 - -```sql -set iceberg.mr.schema.auto.conversion=true; - -CREATE TABLE IF NOT EXISTS t_order -( - order_id BIGINT, - order_type INT, - user_id INT NOT NULL, - address_id BIGINT NOT NULL, - status VARCHAR(50), - PRIMARY KEY (order_id) disable novalidate -) STORED BY ICEBERG STORED AS ORC TBLPROPERTIES ('format-version' = '2'); -``` - -由于 HiveServer2 JDBC Driver 未实现 `java.sql.DatabaseMetaData#getURL()`, -ShardingSphere 做了模糊处理,因此用户暂时仅可通过 HikariCP 连接 HiveServer2。 - -HiveServer2 不支持 ShardingSphere 集成级别的本地事务,XA 事务和 Seata AT 模式事务,更多讨论位于 https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions 。 - 8. 
由于 https://github.com/oracle/graal/issues/7979 的影响, 对应 `com.oracle.database.jdbc:ojdbc8` Maven 模块的 Oracle JDBC Driver 无法在 GraalVM Native Image 下使用。 diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.en.md b/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.en.md index 4783a5a43564f8..38bfea154192dc 100644 --- a/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.en.md +++ b/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.en.md @@ -302,88 +302,10 @@ Possible configuration examples are as follows, ClickHouse does not support local transactions, XA transactions, and Seata AT mode transactions at the ShardingSphere integration level. More discussion is at https://github.com/ClickHouse/clickhouse-docs/issues/2300 . -7. When using the Hive dialect through ShardingSphere JDBC, affected by https://issues.apache.org/jira/browse/HIVE-28445 , - users should not use `org.apache.hive:hive-jdbc:4.0.1` with `classifier` as `standalone` to avoid dependency conflicts. - Possible configuration examples are as follows, - -```xml - - - - org.apache.shardingsphere - shardingsphere-jdbc - ${shardingsphere.version} - - - org.apache.shardingsphere - shardingsphere-infra-database-hive - ${shardingsphere.version} - - - org.apache.shardingsphere - shardingsphere-parser-sql-hive - ${shardingsphere.version} - - - org.apache.hive - hive-jdbc - 4.0.1 - - - org.apache.hive - hive-service - 4.0.1 - - - org.apache.hadoop - hadoop-client-api - 3.3.6 - - - -``` - -This can lead to a large number of dependency conflicts. -If the user does not want to manually resolve potentially thousands of lines of dependency conflicts, -a third-party build of the HiveServer2 JDBC Driver `Thin JAR` can be used. 
-An example of a possible configuration is as follows, - -```xml - - - - org.apache.shardingsphere - shardingsphere-jdbc - ${shardingsphere.version} - - - org.apache.shardingsphere - shardingsphere-infra-database-hive - ${shardingsphere.version} - - - org.apache.shardingsphere - shardingsphere-parser-sql-hive - ${shardingsphere.version} - - - io.github.linghengqian - hive-server2-jdbc-driver-thin - 1.5.0 - - - com.fasterxml.woodstox - woodstox-core - - - - - -``` - -Affected by https://github.com/grpc/grpc-java/issues/10601 , should users incorporate `org.apache.hive:hive-service` into their project, +7. Affected by https://github.com/grpc/grpc-java/issues/10601 , should users incorporate `org.apache.hive:hive-jdbc` into their project, it is imperative to create a file named `native-image.properties` within the directory `META-INF/native-image/io.grpc/grpc-netty-shaded` of the classpath, containing the following content, + ```properties Args=--initialize-at-run-time=\ io.grpc.netty.shaded.io.netty.channel.ChannelHandlerMask,\ @@ -415,57 +337,6 @@ Args=--initialize-at-run-time=\ io.grpc.netty.shaded.io.netty.util.AttributeKey ``` -In order to be able to use DML SQL statements such as `delete`, when connecting to HiveServer2, -users should consider using only ACID-supported tables in ShardingSphere JDBC. `apache/hive` provides a variety of transaction solutions. - -The first option is to use ACID tables, and the possible table creation process is as follows. -Due to its outdated catalog-based table format, -users may have to wait before and after DML statement execution to let HiveServer2 complete the inefficient DML operations. 
- -```sql -set metastore.compactor.initiator.on=true; -set metastore.compactor.cleaner.on=true; -set metastore.compactor.worker.threads=5; - -set hive.support.concurrency=true; -set hive.exec.dynamic.partition.mode=nonstrict; -set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; - -CREATE TABLE IF NOT EXISTS t_order -( - order_id BIGINT, - order_type INT, - user_id INT NOT NULL, - address_id BIGINT NOT NULL, - status VARCHAR(50), - PRIMARY KEY (order_id) disable novalidate -) CLUSTERED BY (order_id) INTO 2 BUCKETS STORED AS ORC TBLPROPERTIES ('transactional' = 'true'); -``` - -The second option is to use Iceberg table. The possible table creation process is as follows. -Apache Iceberg table format is poised to replace the traditional Hive table format in the coming years, -see https://blog.cloudera.com/from-hive-tables-to-iceberg-tables-hassle-free/ . - -```sql -set iceberg.mr.schema.auto.conversion=true; - -CREATE TABLE IF NOT EXISTS t_order -( - order_id BIGINT, - order_type INT, - user_id INT NOT NULL, - address_id BIGINT NOT NULL, - status VARCHAR(50), - PRIMARY KEY (order_id) disable novalidate -) STORED BY ICEBERG STORED AS ORC TBLPROPERTIES ('format-version' = '2'); -``` - -Since HiveServer2 JDBC Driver does not implement `java.sql.DatabaseMetaData#getURL()`, -ShardingSphere has done some obfuscation, so users can only connect to HiveServer2 through HikariCP for now. - -HiveServer2 does not support local transactions, XA transactions, and Seata AT mode transactions at the ShardingSphere integration level. -More discussion is available at https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions . - 8. Due to https://github.com/oracle/graal/issues/7979 , the Oracle JDBC Driver corresponding to the `com.oracle.database.jdbc:ojdbc8` Maven module cannot be used under GraalVM Native Image. 
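The `Args` value in the `native-image.properties` shown earlier is a single `--initialize-at-run-time=` flag whose comma-separated class list is spread across backslash-newline continuations. A minimal sketch of how such a value can be split into individual class names once the continuations are joined (the helper class is hypothetical, not a GraalVM or ShardingSphere API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical helper: splits the value of a single `--initialize-at-run-time=`
// flag, as found in the Args line of native-image.properties, into the
// individual fully qualified class names.
final class NativeImageArgs {

    static List<String> runtimeInitializedClasses(String args) {
        String prefix = "--initialize-at-run-time=";
        String value = args.trim().substring(prefix.length());
        return Arrays.stream(value.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toList());
    }
}
```

Each entry, such as `io.grpc.netty.shaded.io.netty.util.AttributeKey`, becomes one element of the resulting list.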
diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.cn.md b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.cn.md new file mode 100644 index 00000000000000..da5c7700bb8084 --- /dev/null +++ b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.cn.md @@ -0,0 +1,318 @@ ++++ +title = "HiveServer2" +weight = 6 ++++ + +## 背景信息 + +ShardingSphere 默认情况下不提供对 `org.apache.hive.jdbc.HiveDriver` 的 `driverClassName` 的支持。 +ShardingSphere 对 HiveServer2 JDBC Driver 的支持位于可选模块中。 + +## 前提条件 + +要在 ShardingSphere 的配置文件为数据节点使用类似 `jdbc:hive2://localhost:10000/` 的 `jdbcUrl`, +可能的 Maven 依赖关系如下, + +```xml + + + org.apache.shardingsphere + shardingsphere-jdbc + ${shardingsphere.version} + + + org.apache.shardingsphere + shardingsphere-infra-database-hive + ${shardingsphere.version} + + + org.apache.shardingsphere + shardingsphere-parser-sql-hive + ${shardingsphere.version} + + + org.apache.hive + hive-jdbc + 4.0.1 + + + org.apache.hive + hive-service + 4.0.1 + + + org.apache.hadoop + hadoop-client-api + 3.3.6 + + +``` + +### 可选的解决依赖冲突的捷径 + +直接使用 `org.apache.hive:hive-jdbc:4.0.1` 会导致大量的依赖冲突。 +如果用户不希望手动解决潜在的数千行的依赖冲突,可以使用 HiveServer2 JDBC Driver 的 Thin JAR 的第三方构建。 +可能的配置例子如下, + +```xml + + + org.apache.shardingsphere + shardingsphere-jdbc + ${shardingsphere.version} + + + org.apache.shardingsphere + shardingsphere-infra-database-hive + ${shardingsphere.version} + + + org.apache.shardingsphere + shardingsphere-parser-sql-hive + ${shardingsphere.version} + + + io.github.linghengqian + hive-server2-jdbc-driver-thin + 1.5.0 + + + com.fasterxml.woodstox + woodstox-core + + + + +``` + +## 配置示例 + +### 启动 HiveServer2 + +编写 Docker Compose 文件来启动 HiveServer2。 + +```yaml +services: + hive-server2: + image: apache/hive:4.0.1 + environment: + SERVICE_NAME: hiveserver2 + ports: + - "10000:10000" + expose: + - 10002 +``` + +### 创建业务表 + +通过第三方工具在 HiveServer2 内创建业务库与业务表。 +以 DBeaver CE 
为例,使用 `jdbc:hive2://localhost:10000/` 的 `jdbcUrl` 连接至 HiveServer2,`username` 和 `password` 留空。 + +```sql +-- noinspection SqlNoDataSourceInspectionForFile +CREATE DATABASE demo_ds_0; +CREATE DATABASE demo_ds_1; +CREATE DATABASE demo_ds_2; +``` + +分别使用 `jdbc:hive2://localhost:10000/demo_ds_0` , +`jdbc:hive2://localhost:10000/demo_ds_1` 和 `jdbc:hive2://localhost:10000/demo_ds_2` 的 `jdbcUrl` 连接至 HiveServer2 来执行如下 SQL, + +```sql +-- noinspection SqlNoDataSourceInspectionForFile +set iceberg.mr.schema.auto.conversion=true; + +CREATE TABLE IF NOT EXISTS t_order +( + order_id BIGINT, + order_type INT, + user_id INT NOT NULL, + address_id BIGINT NOT NULL, + status VARCHAR(50), + PRIMARY KEY (order_id) disable novalidate +) STORED BY ICEBERG STORED AS ORC TBLPROPERTIES ('format-version' = '2'); + +TRUNCATE TABLE t_order; +``` + +### 在业务项目创建 ShardingSphere 数据源 + +在业务项目引入`前提条件`涉及的依赖后,在业务项目的 classpath 上编写 ShardingSphere 数据源的配置文件`demo.yaml`, + +```yaml +dataSources: + ds_0: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.apache.hive.jdbc.HiveDriver + jdbcUrl: jdbc:hive2://localhost:10000/demo_ds_0 + ds_1: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.apache.hive.jdbc.HiveDriver + jdbcUrl: jdbc:hive2://localhost:10000/demo_ds_1 + ds_2: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.apache.hive.jdbc.HiveDriver + jdbcUrl: jdbc:hive2://localhost:10000/demo_ds_2 +rules: +- !SHARDING + tables: + t_order: + actualDataNodes: + keyGenerateStrategy: + column: order_id + keyGeneratorName: snowflake + defaultDatabaseStrategy: + standard: + shardingColumn: user_id + shardingAlgorithmName: inline + shardingAlgorithms: + inline: + type: INLINE + props: + algorithm-expression: ds_${user_id % 2} + keyGenerators: + snowflake: + type: SNOWFLAKE +``` + +### 享受集成 + +创建 ShardingSphere 的数据源, + +```java +import com.zaxxer.hikari.HikariConfig; +import com.zaxxer.hikari.HikariDataSource; +import 
javax.sql.DataSource; +public class ExampleUtils { + DataSource createDataSource() { + HikariConfig config = new HikariConfig(); + config.setJdbcUrl("jdbc:shardingsphere:classpath:demo.yaml"); + config.setDriverClassName("org.apache.shardingsphere.driver.ShardingSphereDriver"); + return new HikariDataSource(config); + } +} +``` + +可直接在此`javax.sql.DataSource`相关的 ShardingSphere DataSource 上执行逻辑 SQL,享受它, + +```sql +-- noinspection SqlNoDataSourceInspectionForFile +INSERT INTO t_order (user_id, order_type, address_id, status) VALUES (1, 1, 1, "INSERT_TEST"); +DELETE FROM t_order WHERE order_id=1; +``` + +## 使用限制 + +### 版本限制 + +HiveServer2 `2.x` 和 HiveServer2 `3.x` 发行版的生命周期已经结束。 +参考 https://lists.apache.org/thread/0mh4hvpllzv877bkx1f9srv1c3hlbtt9 和 https://lists.apache.org/thread/mpzrv7v1hqqo4cmp0zorswnbvd7ltmbp 。 +ShardingSphere 仅针对 HiveServer2 `4.0.1` 进行集成测试。 + +### HiveServer2 JDBC Driver 的 Uber JAR 限制 + +受 https://issues.apache.org/jira/browse/HIVE-28445 影响, +用户不应该使用 `classifier` 为 `standalone` 的 `org.apache.hive:hive-jdbc:4.0.1`,以避免依赖冲突。 + +### 嵌入式 HiveServer2 限制 + +嵌入式 HiveServer2 不再被 Hive 社区认为是用户友好的,用户不应该尝试通过 ShardingSphere 的配置文件启动 嵌入式 HiveServer2。 +用户总应该通过 HiveServer2 的 Docker Image `apache/hive:4.0.1` 启动 HiveServer2。 +参考 https://issues.apache.org/jira/browse/HIVE-28418 。 + +### Hadoop 限制 + +用户仅可使用 Hadoop `3.3.6` 来作为 HiveServer2 JDBC Driver `4.0.1` 的底层 Hadoop 依赖。 +HiveServer2 JDBC Driver `4.0.1` 不支持 Hadoop `3.4.1`, +参考 https://github.com/apache/hive/pull/5500 。 + +### 数据库连接池限制 + +由于 `org.apache.hive.jdbc.DatabaseMetaData` 未实现 `java.sql.DatabaseMetaData#getURL()`, +ShardingSphere 在`org.apache.shardingsphere.infra.database.DatabaseTypeEngine#getStorageType(javax.sql.DataSource)`处做了模糊处理, +因此用户暂时仅可通过 `com.zaxxer.hikari.HikariDataSource` 的数据库连接池连接 HiveServer2。 + +若用户需要通过 `com.alibaba.druid.pool.DruidDataSource` 的数据库连接池连接 HiveServer2, +用户应当考虑在 Hive 的主分支实现 `java.sql.DatabaseMetaData#getURL()`, +而不是尝试修改 ShardingSphere 的内部类。 + +### SQL 限制 + +ShardingSphere JDBC 
DataSource 尚不支持执行 HiveServer2 的 `SET` 语句,`CREATE TABLE` 语句和 `TRUNCATE TABLE` 语句。 + +用户应考虑为 ShardingSphere 提交包含单元测试的 PR。 + +### jdbcURL 限制 + +对于 ShardingSphere 的配置文件,对 HiveServer2 的 jdbcURL 存在限制。作为前提, +HiveServer2 的 jdbcURL 格式为 `jdbc:hive2://<host1>:<port1>,<host2>:<port2>/dbName;initFile=<file>;sess_var_list?hive_conf_list#hive_var_list`。 +ShardingSphere 当前对参数的解析仅支持以`jdbc:hive2://localhost:10000/demo_ds_1;initFile=/tmp/init.sql`为代表的`;hive_conf_list`部分。 + +若用户需使用`;sess_var_list`或`#hive_var_list`的 jdbcURL 参数,考虑为 ShardingSphere 提交包含单元测试的 PR。 + +### 分布式序列限制 + +由于 `org.apache.hive.jdbc.HiveStatement` 未实现 `java.sql.Statement#getGeneratedKeys()`, +ShardingSphere JDBC Connection 无法通过 `java.sql.Statement.RETURN_GENERATED_KEYS` 获得 ShardingSphere 生成的雪花 ID 等分布式序列。 + +若用户需要通过 `java.sql.Statement.RETURN_GENERATED_KEYS` 从 HiveServer2 获得 ShardingSphere 生成的雪花 ID 等分布式序列, +用户应当考虑在 Hive 的主分支实现 `java.sql.Statement#getGeneratedKeys()`, +而不是尝试修改 ShardingSphere 的内部类。 + +### 在 ShardingSphere 数据源上使用 DML SQL 语句的前提条件 + +为了能够使用 `delete` 等 DML SQL 语句,当连接到 HiveServer2 时,用户应当考虑在 ShardingSphere JDBC 中仅使用支持 ACID 的表。 +`apache/hive` 提供了多种事务解决方案。 + +第1种选择是使用 ACID 表,可能的建表流程如下。 +由于其过时的基于目录的表格式,用户可能不得不在 DML 语句执行前后进行等待,以让 HiveServer2 完成低效的 DML 操作。 + +```sql +-- noinspection SqlNoDataSourceInspectionForFile +set metastore.compactor.initiator.on=true; +set metastore.compactor.cleaner.on=true; +set metastore.compactor.worker.threads=5; + +set hive.support.concurrency=true; +set hive.exec.dynamic.partition.mode=nonstrict; +set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; + +CREATE TABLE IF NOT EXISTS t_order +( + order_id BIGINT, + order_type INT, + user_id INT NOT NULL, + address_id BIGINT NOT NULL, + status VARCHAR(50), + PRIMARY KEY (order_id) disable novalidate +) CLUSTERED BY (order_id) INTO 2 BUCKETS STORED AS ORC TBLPROPERTIES ('transactional' = 'true'); +``` + +第2种选择是使用 Iceberg 表,可能的建表流程如下。Apache Iceberg 表格式有望在未来几年取代传统的 Hive 表格式, +参考 https://blog.cloudera.com/from-hive-tables-to-iceberg-tables-hassle-free/ 。 
+```sql +-- noinspection SqlNoDataSourceInspectionForFile +set iceberg.mr.schema.auto.conversion=true; + +CREATE TABLE IF NOT EXISTS t_order +( + order_id BIGINT, + order_type INT, + user_id INT NOT NULL, + address_id BIGINT NOT NULL, + status VARCHAR(50), + PRIMARY KEY (order_id) disable novalidate +) STORED BY ICEBERG STORED AS ORC TBLPROPERTIES ('format-version' = '2'); +``` + +### 事务限制 + +HiveServer2 不支持 ShardingSphere 集成级别的本地事务,XA 事务或 Seata 的 AT 模式事务, +更多讨论位于 https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions 。 + +### DBeaver CE 限制 + +当用户使用 DBeaver CE 连接至 HiveServer2 时,需确保 DBeaver CE 版本大于或等于 `24.2.5`。 +参考 https://github.com/dbeaver/dbeaver/pull/35059 。 diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.en.md b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.en.md new file mode 100644 index 00000000000000..bd8b67d5f1f1c0 --- /dev/null +++ b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.en.md @@ -0,0 +1,329 @@ ++++ +title = "HiveServer2" +weight = 6 ++++ + +## Background Information + +ShardingSphere does not provide support for `driverClassName` of `org.apache.hive.jdbc.HiveDriver` by default. + +ShardingSphere's support for HiveServer2 JDBC Driver is in the optional module. + +## Prerequisites + +To use a `jdbcUrl` like `jdbc:hive2://localhost:10000/` for the data node in the ShardingSphere configuration file, +The possible Maven dependencies are as follows. 
+ +```xml + + + org.apache.shardingsphere + shardingsphere-jdbc + ${shardingsphere.version} + + + org.apache.shardingsphere + shardingsphere-infra-database-hive + ${shardingsphere.version} + + + org.apache.shardingsphere + shardingsphere-parser-sql-hive + ${shardingsphere.version} + + + org.apache.hive + hive-jdbc + 4.0.1 + + + org.apache.hive + hive-service + 4.0.1 + + + org.apache.hadoop + hadoop-client-api + 3.3.6 + + +``` + +### Optional shortcut to resolve dependency conflicts + +Using `org.apache.hive:hive-jdbc:4.0.1` directly will cause a large number of dependency conflicts. +If users do not want to manually resolve potentially thousands of lines of dependency conflicts, +they can use a third-party build of the HiveServer2 JDBC Driver Thin JAR. +The following is an example of a possible configuration, + +```xml + + + org.apache.shardingsphere + shardingsphere-jdbc + ${shardingsphere.version} + + + org.apache.shardingsphere + shardingsphere-infra-database-hive + ${shardingsphere.version} + + + org.apache.shardingsphere + shardingsphere-parser-sql-hive + ${shardingsphere.version} + + + io.github.linghengqian + hive-server2-jdbc-driver-thin + 1.5.0 + + + com.fasterxml.woodstox + woodstox-core + + + + +``` + +## Configuration Example + +### Start HiveServer2 + +Write a Docker Compose file to start HiveServer2. + +```yaml +services: + hive-server2: + image: apache/hive:4.0.1 + environment: + SERVICE_NAME: hiveserver2 + ports: + - "10000:10000" + expose: + - 10002 +``` + +### Create business tables + +Use a third-party tool to create a business database and business table in HiveServer2. +Taking DBeaver CE as an example, +use the `jdbcUrl` of `jdbc:hive2://localhost:10000/` to connect to HiveServer2, and leave `username` and `password` blank. 
+ + +```sql +-- noinspection SqlNoDataSourceInspectionForFile +CREATE DATABASE demo_ds_0; +CREATE DATABASE demo_ds_1; +CREATE DATABASE demo_ds_2; +``` + +Use the `jdbcUrl` of `jdbc:hive2://localhost:10000/demo_ds_0`, +`jdbc:hive2://localhost:10000/demo_ds_1` and `jdbc:hive2://localhost:10000/demo_ds_2` to connect to HiveServer2 to execute the following SQL, + +```sql +-- noinspection SqlNoDataSourceInspectionForFile +set iceberg.mr.schema.auto.conversion=true; + +CREATE TABLE IF NOT EXISTS t_order +( + order_id BIGINT, + order_type INT, + user_id INT NOT NULL, + address_id BIGINT NOT NULL, + status VARCHAR(50), + PRIMARY KEY (order_id) disable novalidate +) STORED BY ICEBERG STORED AS ORC TBLPROPERTIES ('format-version' = '2'); + +TRUNCATE TABLE t_order; +``` + +### Create ShardingSphere data source in business projects + +After the business project introduces the dependencies involved in `prerequisites`, +write the ShardingSphere data source configuration file `demo.yaml` on the classpath of the business project. 
+ +```yaml +dataSources: + ds_0: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.apache.hive.jdbc.HiveDriver + jdbcUrl: jdbc:hive2://localhost:10000/demo_ds_0 + ds_1: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.apache.hive.jdbc.HiveDriver + jdbcUrl: jdbc:hive2://localhost:10000/demo_ds_1 + ds_2: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.apache.hive.jdbc.HiveDriver + jdbcUrl: jdbc:hive2://localhost:10000/demo_ds_2 +rules: +- !SHARDING + tables: + t_order: + actualDataNodes: + keyGenerateStrategy: + column: order_id + keyGeneratorName: snowflake + defaultDatabaseStrategy: + standard: + shardingColumn: user_id + shardingAlgorithmName: inline + shardingAlgorithms: + inline: + type: INLINE + props: + algorithm-expression: ds_${user_id % 2} + keyGenerators: + snowflake: + type: SNOWFLAKE +``` + +### Enjoy the integration + +Create a ShardingSphere data source, + +```java +import com.zaxxer.hikari.HikariConfig; +import com.zaxxer.hikari.HikariDataSource; +import javax.sql.DataSource; +public class ExampleUtils { + DataSource createDataSource() { + HikariConfig config = new HikariConfig(); + config.setJdbcUrl("jdbc:shardingsphere:classpath:demo.yaml"); + config.setDriverClassName("org.apache.shardingsphere.driver.ShardingSphereDriver"); + return new HikariDataSource(config); + } +} +``` + +You can directly execute logical SQL on the ShardingSphere DataSource related to this `javax.sql.DataSource`, enjoy it, + +```sql +-- noinspection SqlNoDataSourceInspectionForFile +INSERT INTO t_order (user_id, order_type, address_id, status) VALUES (1, 1, 1, "INSERT_TEST"); +DELETE FROM t_order WHERE order_id=1; +``` + +## Usage Restrictions + +### Version Restrictions + +The lifecycle of HiveServer2 `2.x` and HiveServer2 `3.x` releases has ended. 
+Refer to https://lists.apache.org/thread/0mh4hvpllzv877bkx1f9srv1c3hlbtt9 and https://lists.apache.org/thread/mpzrv7v1hqqo4cmp0zorswnbvd7ltmbp . +ShardingSphere is only integration-tested against HiveServer2 `4.0.1`. + +### Uber JAR Limitation of HiveServer2 JDBC Driver + +Affected by https://issues.apache.org/jira/browse/HIVE-28445 , +users should not use `org.apache.hive:hive-jdbc:4.0.1` with `classifier` as `standalone` to avoid dependency conflicts. + +### Embedded HiveServer2 Limitation + +Embedded HiveServer2 is no longer considered user-friendly by the Hive community, +and users should not try to start embedded HiveServer2 through ShardingSphere's configuration file. +Users should always start HiveServer2 through HiveServer2's Docker Image `apache/hive:4.0.1`. +See https://issues.apache.org/jira/browse/HIVE-28418 . + +### Hadoop Limitations + +Users can only use Hadoop `3.3.6` as the underlying Hadoop dependency of HiveServer2 JDBC Driver `4.0.1`. +HiveServer2 JDBC Driver `4.0.1` does not support Hadoop `3.4.1`; +see https://github.com/apache/hive/pull/5500 . + +### Database Connection Pool Limitation + +Since `org.apache.hive.jdbc.DatabaseMetaData` does not implement `java.sql.DatabaseMetaData#getURL()`, +ShardingSphere applies fuzzy matching at `org.apache.shardingsphere.infra.database.DatabaseTypeEngine#getStorageType(javax.sql.DataSource)`, +so users can only connect to HiveServer2 through the database connection pool of `com.zaxxer.hikari.HikariDataSource` for the time being. + +If users need to connect to HiveServer2 through the database connection pool of `com.alibaba.druid.pool.DruidDataSource`, +they should consider implementing `java.sql.DatabaseMetaData#getURL()` in the main branch of Hive, +rather than trying to modify the internal classes of ShardingSphere. + +### SQL Limitations + +ShardingSphere JDBC DataSource does not yet support executing HiveServer2's `SET` statement, +`CREATE TABLE` statement, and `TRUNCATE TABLE` statement. 
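Since those statements cannot go through the ShardingSphere DataSource, one workaround is to run them over a plain `org.apache.hive.jdbc.HiveDriver` connection and reserve the ShardingSphere DataSource for DML and queries. A minimal, hypothetical predicate (not a ShardingSphere API) for deciding which statements must bypass ShardingSphere:

```java
// Hypothetical helper: returns true for the statement categories that the
// ShardingSphere JDBC DataSource cannot yet execute against HiveServer2
// (`SET`, `CREATE TABLE`, `TRUNCATE TABLE`), which must therefore be sent
// over a plain HiveDriver connection.
final class HiveStatementRouter {

    static boolean requiresNativeConnection(String sql) {
        String normalized = sql.trim().toUpperCase(java.util.Locale.ROOT);
        return normalized.startsWith("SET ")
                || normalized.startsWith("CREATE TABLE")
                || normalized.startsWith("TRUNCATE TABLE");
    }
}
```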
+ +Users should consider submitting a PR containing unit tests for ShardingSphere. + +### jdbcURL Restrictions + +For ShardingSphere configuration files, there are restrictions on HiveServer2's jdbcURL. As background, +HiveServer2's jdbcURL format is `jdbc:hive2://<host1>:<port1>,<host2>:<port2>/dbName;initFile=<file>;sess_var_list?hive_conf_list#hive_var_list`. + +ShardingSphere currently only supports the `;hive_conf_list` part, as represented by `jdbc:hive2://localhost:10000/demo_ds_1;initFile=/tmp/init.sql`. + +If users need to use the jdbcURL parameters of `;sess_var_list` or `#hive_var_list`, +consider submitting a PR containing unit tests for ShardingSphere. + +### Distributed Sequence Limitations + +Since `org.apache.hive.jdbc.HiveStatement` does not implement `java.sql.Statement#getGeneratedKeys()`, +ShardingSphere JDBC Connection cannot obtain distributed sequences such as the Snowflake IDs generated by ShardingSphere through `java.sql.Statement.RETURN_GENERATED_KEYS`. + +If users need to obtain distributed sequences such as Snowflake IDs generated by ShardingSphere from HiveServer2 through `java.sql.Statement.RETURN_GENERATED_KEYS`, +they should consider implementing `java.sql.Statement#getGeneratedKeys()` in the main branch of Hive, +rather than trying to modify the internal classes of ShardingSphere. + +### Prerequisites for using DML SQL statements on ShardingSphere data sources + +In order to be able to use DML SQL statements such as `delete`, +users should consider using only ACID-supported tables in ShardingSphere JDBC when connecting to HiveServer2. +`apache/hive` provides multiple transaction solutions. + +The first option is to use ACID tables, and the possible table creation process is as follows. +Due to its outdated catalog-based table format, +users may have to wait before and after the execution of DML statements to allow HiveServer2 to complete inefficient DML operations. 
+ +```sql +-- noinspection SqlNoDataSourceInspectionForFile +set metastore.compactor.initiator.on=true; +set metastore.compactor.cleaner.on=true; +set metastore.compactor.worker.threads=5; + +set hive.support.concurrency=true; +set hive.exec.dynamic.partition.mode=nonstrict; +set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; + +CREATE TABLE IF NOT EXISTS t_order +( + order_id BIGINT, + order_type INT, + user_id INT NOT NULL, + address_id BIGINT NOT NULL, + status VARCHAR(50), + PRIMARY KEY (order_id) disable novalidate +) CLUSTERED BY (order_id) INTO 2 BUCKETS STORED AS ORC TBLPROPERTIES ('transactional' = 'true'); +``` + +The second option is to use Iceberg tables. The possible table creation process is as follows. Apache Iceberg table format is expected to replace the traditional Hive table format in the next few years. +Refer to https://blog.cloudera.com/from-hive-tables-to-iceberg-tables-hassle-free/ . + +```sql +-- noinspection SqlNoDataSourceInspectionForFile +set iceberg.mr.schema.auto.conversion=true; + +CREATE TABLE IF NOT EXISTS t_order +( + order_id BIGINT, + order_type INT, + user_id INT NOT NULL, + address_id BIGINT NOT NULL, + status VARCHAR(50), + PRIMARY KEY (order_id) disable novalidate +) STORED BY ICEBERG STORED AS ORC TBLPROPERTIES ('format-version' = '2'); +``` + +### Transaction Limitations + +HiveServer2 does not support local transactions at the ShardingSphere integration level, XA transactions, or Seata's AT mode transactions. +For more discussion, please visit https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions. + +### DBeaver CE Limitations + +When users use DBeaver CE to connect to HiveServer2, they need to ensure that the DBeaver CE version is greater than or equal to `24.2.5`. + +See https://github.com/dbeaver/dbeaver/pull/35059. 
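The DBeaver CE floor noted above (`24.2.5`) can be enforced programmatically before attempting a connection; a small sketch of a dotted-version comparison (the class is illustrative, not a DBeaver or ShardingSphere API):

```java
// Illustrative check: compares a dotted version string against a minimum
// version such as the DBeaver CE requirement `24.2.5`. Missing components
// are treated as zero, so "25.0" is compared as "25.0.0".
final class VersionCheck {

    static boolean atLeast(String version, String minimum) {
        String[] v = version.split("\\.");
        String[] m = minimum.split("\\.");
        int length = Math.max(v.length, m.length);
        for (int i = 0; i < length; i++) {
            int a = i < v.length ? Integer.parseInt(v[i]) : 0;
            int b = i < m.length ? Integer.parseInt(m[i]) : 0;
            if (a != b) {
                return a > b;
            }
        }
        return true; // all components equal
    }
}
```

Comparing numeric components left to right avoids the pitfalls of plain string comparison, where `"24.10.0"` would sort before `"24.2.5"`.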
diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.cn.md b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.cn.md index 1ff66f3ce460b7..6f011d9187a340 100644 --- a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.cn.md +++ b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.cn.md @@ -4,7 +4,7 @@ weight = 6 +++ ShardingSphere 默认情况下不提供对 `org.testcontainers.jdbc.ContainerDatabaseDriver` 的 `driverClassName` 的支持。 -要在 ShardingSphere 的配置文件为数据节点使用类似 `jdbc:tc:postgresql:17.1-bookworm://test-native-databases-postgres/demo_ds_0` 的 `jdbcUrl`, +要在 ShardingSphere 的配置文件为数据节点使用类似 `jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_0` 的 `jdbcUrl`, 可能的 Maven 依赖关系如下, ```xml @@ -28,7 +28,27 @@ ShardingSphere 默认情况下不提供对 `org.testcontainers.jdbc.ContainerDat ``` -`org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` 为 testcontainers-java 分格的 jdbcURL 提供支持, +要使用 `org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` 模块, +用户设备总是需要安装 Docker Engine 或符合 https://java.testcontainers.org/supported_docker_environment/ 要求的 alternative container runtimes。 +此时可在 ShardingSphere 的 YAML 配置文件正常使用 `jdbc:tc:postgresql:` 前缀的 jdbcURL。 + +```yaml +dataSources: + ds_0: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver + jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_0 + ds_1: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver + jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_1 + ds_2: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver + jdbcUrl: 
jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_2 +``` + +`org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` 为 testcontainers-java 风格的 jdbcURL 提供支持, 包括但不限于, 1. 为 `jdbc:tc:clickhouse:` 的 jdbcURL 前缀提供支持的 Maven 模块 `org.testcontainers:clickhouse:1.20.3` diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.en.md b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.en.md index 0593e098d52eae..e2698a09e71384 100644 --- a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.en.md +++ b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.en.md @@ -4,7 +4,7 @@ weight = 6 +++ ShardingSphere does not provide support for `driverClassName` of `org.testcontainers.jdbc.ContainerDatabaseDriver` by default. -To use `jdbcUrl` like `jdbc:tc:postgresql:17.1-bookworm://test-native-databases-postgres/demo_ds_0` for data nodes in ShardingSphere's configuration file, +To use `jdbcUrl` like `jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_0` for data nodes in ShardingSphere's configuration file, the possible Maven dependencies are as follows, ```xml @@ -28,7 +28,27 @@ the possible Maven dependencies are as follows, ``` -`org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` provides support for jdbcURL in the testcontainers-java partition, +At this time, you can use the jdbcURL with the prefix `jdbc:tc:postgresql:` normally in the YAML configuration file of ShardingSphere. 
+ +```yaml +dataSources: + ds_0: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver + jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_0 + ds_1: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver + jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_1 + ds_2: + dataSourceClassName: com.zaxxer.hikari.HikariDataSource + driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver + jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_2 +``` + +To use the `org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` module, +the user machine must have Docker Engine or an alternative container runtime that complies with https://java.testcontainers.org/supported_docker_environment/ installed. +`org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` provides support for testcontainers-java style jdbcURLs, including but not limited to, 1.
Maven module `org.testcontainers:clickhouse:1.20.3` that provides support for jdbcURL prefixes for `jdbc:tc:clickhouse:` diff --git a/infra/reachability-metadata/src/main/resources/META-INF/native-image/org.apache.shardingsphere/generated-reachability-metadata/reflect-config.json b/infra/reachability-metadata/src/main/resources/META-INF/native-image/org.apache.shardingsphere/generated-reachability-metadata/reflect-config.json index 57bbb3722e1554..7ec9d7b5ed2d85 100644 --- a/infra/reachability-metadata/src/main/resources/META-INF/native-image/org.apache.shardingsphere/generated-reachability-metadata/reflect-config.json +++ b/infra/reachability-metadata/src/main/resources/META-INF/native-image/org.apache.shardingsphere/generated-reachability-metadata/reflect-config.json @@ -71,6 +71,10 @@ "condition":{"typeReachable":"org.apache.shardingsphere.mode.repository.standalone.jdbc.JDBCRepository"}, "name":"[Lcom.zaxxer.hikari.util.ConcurrentBag$IConcurrentBagEntry;" }, +{ + "condition":{"typeReachable":"org.apache.shardingsphere.proxy.backend.connector.jdbc.datasource.JDBCBackendDataSource"}, + "name":"[Lcom.zaxxer.hikari.util.ConcurrentBag$IConcurrentBagEntry;" +}, { "condition":{"typeReachable":"org.apache.shardingsphere.proxy.frontend.postgresql.command.query.extended.Portal"}, "name":"[Lcom.zaxxer.hikari.util.ConcurrentBag$IConcurrentBagEntry;" @@ -2070,7 +2074,7 @@ "queryAllDeclaredMethods":true }, { - "condition":{"typeReachable":"org.apache.shardingsphere.mode.manager.cluster.listener.DatabaseMetaDataChangedListener$$Lambda/0x00007ffa47b22d00"}, + "condition":{"typeReachable":"org.apache.shardingsphere.mode.manager.cluster.listener.DatabaseMetaDataChangedListener$$Lambda/0x00007f619fb277d8"}, "name":"org.apache.shardingsphere.mode.manager.cluster.event.subscriber.dispatch.MetaDataChangedSubscriber" }, { @@ -3319,6 +3323,11 @@ "name":"org.apache.shardingsphere.sql.parser.statement.clickhouse.dml.ClickHouseSelectStatement", "methods":[{"name":"<init>","parameterTypes":[]
}] }, +{ + "condition":{"typeReachable":"org.apache.shardingsphere.driver.jdbc.core.statement.ShardingSpherePreparedStatement"}, + "name":"org.apache.shardingsphere.sql.parser.statement.hive.dml.HiveInsertStatement", + "methods":[{"name":"<init>","parameterTypes":[] }] +}, { "condition":{"typeReachable":"org.apache.shardingsphere.driver.jdbc.core.statement.ShardingSpherePreparedStatement"}, "name":"org.apache.shardingsphere.sql.parser.statement.hive.dml.HiveSelectStatement", diff --git a/infra/reachability-metadata/src/main/resources/META-INF/native-image/org.apache.shardingsphere/shardingsphere-infra-reachability-metadata/reflect-config.json b/infra/reachability-metadata/src/main/resources/META-INF/native-image/org.apache.shardingsphere/shardingsphere-infra-reachability-metadata/reflect-config.json index fbbf20840062cf..46d056e7c417fa 100644 --- a/infra/reachability-metadata/src/main/resources/META-INF/native-image/org.apache.shardingsphere/shardingsphere-infra-reachability-metadata/reflect-config.json +++ b/infra/reachability-metadata/src/main/resources/META-INF/native-image/org.apache.shardingsphere/shardingsphere-infra-reachability-metadata/reflect-config.json @@ -274,11 +274,6 @@ "name":"org.apache.shardingsphere.sql.parser.hive.visitor.statement.type.HiveDMLStatementVisitor", "methods":[{"name":"<init>","parameterTypes":[] }] }, -{ - "condition":{"typeReachable":"org.apache.shardingsphere.sql.parser.statement.hive.dml.HiveInsertStatement"}, - "name":"org.apache.shardingsphere.sql.parser.statement.hive.dml.HiveInsertStatement", - "methods":[{"name":"<init>","parameterTypes":[] }] -}, { "condition":{"typeReachable":"org.apache.shardingsphere.infra.binder.engine.statement.dml.DeleteStatementBinder"}, "name":"org.apache.shardingsphere.sql.parser.statement.hive.dml.HiveDeleteStatement", diff --git a/test/native/src/test/java/org/apache/shardingsphere/test/natived/commons/TestShardingService.java
b/test/native/src/test/java/org/apache/shardingsphere/test/natived/commons/TestShardingService.java index 33b8288a7dc78b..d27117cebbf240 100644 --- a/test/native/src/test/java/org/apache/shardingsphere/test/natived/commons/TestShardingService.java +++ b/test/native/src/test/java/org/apache/shardingsphere/test/natived/commons/TestShardingService.java @@ -138,6 +138,7 @@ public void processSuccessInClickHouse() throws SQLException { public void processSuccessInHive() throws SQLException { insertDataInHive(); deleteDataInHive(); + assertThat(orderRepository.selectAll(), equalTo(Collections.emptyList())); assertThat(addressRepository.selectAll(), equalTo(Collections.emptyList())); } @@ -175,16 +176,26 @@ public Collection<Long> insertData(final int autoGeneratedKeys) throws SQLExcept /** * Insert data in Hive. + * {@link org.apache.hive.jdbc.HiveStatement} does not implement {@link java.sql.Statement#getGeneratedKeys()}, + * so the snowflake ID generated by ShardingSphere cannot be obtained. */ public void insertDataInHive() { - LongStream.range(1L, 11L).forEach(action -> { - Address address = new Address(action, "address_test_" + action); - try { - addressRepository.insert(address); - } catch (final SQLException ex) { - throw new RuntimeException(ex); - } - }); + IntStream.range(1, 11).forEach(this::insertSingleDataInHive); + } + + private void insertSingleDataInHive(final int action) { + Order order = new Order(); + order.setUserId(action); + order.setOrderType(action % 2); + order.setAddressId(action); + order.setStatus("INSERT_TEST"); + Address address = new Address((long) action, "address_test_" + action); + try { + orderRepository.insertInHive(order); + addressRepository.insert(address); + } catch (final SQLException ex) { + throw new RuntimeException(ex); + } } /** @@ -219,14 +230,16 @@ public void deleteDataInClickHouse(final Collection<Long> orderIds) throws SQLEx /** * Delete data in Hive.
- * - * @throws SQLException An exception that provides information on a database access error or other errors. */ - public void deleteDataInHive() throws SQLException { - long count = 1L; - for (int i = 1; i <= 10; i++) { - addressRepository.delete(count++); - } + public void deleteDataInHive() { + LongStream.range(1, 11).forEach(action -> { + try { + orderRepository.delete(action); + addressRepository.delete(action); + } catch (final SQLException exception) { + throw new RuntimeException(exception); + } + }); } /** diff --git a/test/native/src/test/java/org/apache/shardingsphere/test/natived/commons/repository/OrderRepository.java b/test/native/src/test/java/org/apache/shardingsphere/test/natived/commons/repository/OrderRepository.java index 362508606b3436..d4f28c41c0a50c 100644 --- a/test/native/src/test/java/org/apache/shardingsphere/test/natived/commons/repository/OrderRepository.java +++ b/test/native/src/test/java/org/apache/shardingsphere/test/natived/commons/repository/OrderRepository.java @@ -269,6 +269,7 @@ public Long insert(final Order order) throws SQLException { * @return orderId of the insert statement * @throws SQLException SQL Exception */ + @SuppressWarnings("MagicConstant") public Long insert(final Order order, final int autoGeneratedKeys) throws SQLException { String sql = "INSERT INTO t_order (user_id, order_type, address_id, status) VALUES (?, ?, ?, ?)"; try ( @@ -288,6 +289,27 @@ public Long insert(final Order order, final int autoGeneratedKeys) throws SQLExc return order.getOrderId(); } + /** + * insert Order to table in HiveServer2. + * {@link org.apache.hive.jdbc.HiveStatement} does not implement {@link java.sql.Statement#getGeneratedKeys()}, + * so the snowflake ID generated by ShardingSphere cannot be obtained. 
+ * + * @param order order + * @throws SQLException SQL Exception + */ + public void insertInHive(final Order order) throws SQLException { + String sql = "INSERT INTO t_order (user_id, order_type, address_id, status) VALUES (?, ?, ?, ?)"; + try ( + Connection connection = dataSource.getConnection(); + PreparedStatement preparedStatement = connection.prepareStatement(sql, Statement.NO_GENERATED_KEYS)) { + preparedStatement.setInt(1, order.getUserId()); + preparedStatement.setInt(2, order.getOrderType()); + preparedStatement.setLong(3, order.getAddressId()); + preparedStatement.setString(4, order.getStatus()); + preparedStatement.executeUpdate(); + } + } + /** * delete by orderId. * diff --git a/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/ClickHouseTest.java b/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/ClickHouseTest.java index 7bbbfd506052c8..fab369ddf420b9 100644 --- a/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/ClickHouseTest.java +++ b/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/ClickHouseTest.java @@ -53,7 +53,7 @@ class ClickHouseTest { @Container - public static final ClickHouseContainer CONTAINER = new ClickHouseContainer("clickhouse/clickhouse-server:24.6.2.17"); + public static final ClickHouseContainer CONTAINER = new ClickHouseContainer("clickhouse/clickhouse-server:24.10.2.80"); private static final String SYSTEM_PROP_KEY_PREFIX = "fixture.test-native.yaml.database.clickhouse."; diff --git a/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/HiveTest.java b/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/HiveTest.java index 3ba80abe935e24..d1a2b01cb5dd58 100644 --- a/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/HiveTest.java +++ 
b/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/HiveTest.java @@ -28,7 +28,6 @@ import org.testcontainers.containers.GenericContainer; import org.testcontainers.junit.jupiter.Container; import org.testcontainers.junit.jupiter.Testcontainers; -import org.testcontainers.utility.DockerImageName; import javax.sql.DataSource; import java.nio.file.Paths; @@ -50,7 +49,7 @@ class HiveTest { @SuppressWarnings("resource") @Container - public static final GenericContainer<?> CONTAINER = new GenericContainer<>(DockerImageName.parse("apache/hive:4.0.1")) + public static final GenericContainer<?> CONTAINER = new GenericContainer<>("apache/hive:4.0.1") .withEnv("SERVICE_NAME", "hiveserver2") .withExposedPorts(10000, 10002); diff --git a/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/MySQLTest.java b/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/MySQLTest.java index c645301f6bec5c..33c9912e817882 100644 --- a/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/MySQLTest.java +++ b/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/MySQLTest.java @@ -29,7 +29,6 @@ import org.testcontainers.containers.GenericContainer; import org.testcontainers.junit.jupiter.Container; import org.testcontainers.junit.jupiter.Testcontainers; -import org.testcontainers.utility.DockerImageName; import javax.sql.DataSource; import java.sql.Connection; @@ -61,7 +60,7 @@ class MySQLTest { @SuppressWarnings("resource") @Container - public static final GenericContainer<?> CONTAINER = new GenericContainer<>(DockerImageName.parse("mysql:9.0.1-oraclelinux9")) + public static final GenericContainer<?> CONTAINER = new GenericContainer<>("mysql:9.1.0-oraclelinux9") .withEnv("MYSQL_DATABASE", DATABASE) .withEnv("MYSQL_ROOT_PASSWORD", PASSWORD) .withExposedPorts(3306); diff --git
a/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/OpenGaussTest.java b/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/OpenGaussTest.java index 96a7dba4326935..cd818a9014551c 100644 --- a/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/OpenGaussTest.java +++ b/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/databases/OpenGaussTest.java @@ -28,7 +28,6 @@ import org.testcontainers.containers.GenericContainer; import org.testcontainers.junit.jupiter.Container; import org.testcontainers.junit.jupiter.Testcontainers; -import org.testcontainers.utility.DockerImageName; import javax.sql.DataSource; import java.sql.Connection; @@ -56,7 +55,7 @@ class OpenGaussTest { @SuppressWarnings("resource") @Container - public static final GenericContainer<?> CONTAINER = new GenericContainer<>(DockerImageName.parse("opengauss/opengauss:5.0.0")) + public static final GenericContainer<?> CONTAINER = new GenericContainer<>("opengauss/opengauss:5.0.0") .withEnv("GS_PASSWORD", PASSWORD) .withExposedPorts(5432); diff --git a/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/transactions/base/SeataTest.java b/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/transactions/base/SeataTest.java index 125d7a2993909d..92969e56d9dac9 100644 --- a/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/transactions/base/SeataTest.java +++ b/test/native/src/test/java/org/apache/shardingsphere/test/natived/jdbc/transactions/base/SeataTest.java @@ -19,6 +19,7 @@ import com.zaxxer.hikari.HikariConfig; import com.zaxxer.hikari.HikariDataSource; +import org.apache.hc.core5.http.HttpStatus; import org.apache.shardingsphere.test.natived.commons.TestShardingService; import org.junit.jupiter.api.AfterAll; import org.junit.jupiter.api.BeforeAll; @@ -44,7 +45,10 @@ class SeataTest { @Container public static final GenericContainer<?> CONTAINER = new
GenericContainer<>("apache/seata-server:2.1.0") .withExposedPorts(7091, 8091) - .waitingFor(Wait.forHttp("/health").forPort(7091).forResponsePredicate("ok"::equals)); + .waitingFor(Wait.forHttp("/health") + .forPort(7091) + .forStatusCode(HttpStatus.SC_OK) + .forResponsePredicate("ok"::equals)); private static final String SERVICE_DEFAULT_GROUP_LIST_KEY = "service.default.grouplist"; diff --git a/test/native/src/test/java/org/apache/shardingsphere/test/natived/proxy/databases/PostgresTest.java b/test/native/src/test/java/org/apache/shardingsphere/test/natived/proxy/databases/PostgresTest.java index 8d4abbeb70b0bd..cc1a1625612c1d 100644 --- a/test/native/src/test/java/org/apache/shardingsphere/test/natived/proxy/databases/PostgresTest.java +++ b/test/native/src/test/java/org/apache/shardingsphere/test/natived/proxy/databases/PostgresTest.java @@ -45,7 +45,7 @@ class PostgresTest { @Container - public static final GenericContainer<?> POSTGRES_CONTAINER = new GenericContainer<>("postgres:16.3-bookworm") + public static final GenericContainer<?> POSTGRES_CONTAINER = new GenericContainer<>("postgres:17.1-bookworm") .withEnv("POSTGRES_PASSWORD", "yourStrongPassword123!") .withExposedPorts(5432); diff --git a/test/native/src/test/java/org/apache/shardingsphere/test/natived/proxy/features/ShardingTest.java b/test/native/src/test/java/org/apache/shardingsphere/test/natived/proxy/features/ShardingTest.java index cb35804e4e726d..8c993bfb917524 100644 --- a/test/native/src/test/java/org/apache/shardingsphere/test/natived/proxy/features/ShardingTest.java +++ b/test/native/src/test/java/org/apache/shardingsphere/test/natived/proxy/features/ShardingTest.java @@ -46,7 +46,7 @@ class ShardingTest { @Container - public static final GenericContainer<?> MYSQL_CONTAINER = new GenericContainer<>("mysql:9.0.1-oraclelinux9") + public static final GenericContainer<?> MYSQL_CONTAINER = new GenericContainer<>("mysql:9.1.0-oraclelinux9") .withEnv("MYSQL_ROOT_PASSWORD", "yourStrongPassword123!")
.withExposedPorts(3306); diff --git a/test/native/src/test/resources/container-license-acceptance.txt b/test/native/src/test/resources/container-license-acceptance.txt index 5ae6ecbd4d1cc3..8ece678d2db12d 100644 --- a/test/native/src/test/resources/container-license-acceptance.txt +++ b/test/native/src/test/resources/container-license-acceptance.txt @@ -1 +1 @@ -mcr.microsoft.com/mssql/server:2022-CU14-ubuntu-22.04 +mcr.microsoft.com/mssql/server:2022-CU16-ubuntu-22.04 diff --git a/test/native/src/test/resources/test-native/yaml/jdbc/databases/postgresql.yaml b/test/native/src/test/resources/test-native/yaml/jdbc/databases/postgresql.yaml index 276988e10b6f4e..80ca30c249d55f 100644 --- a/test/native/src/test/resources/test-native/yaml/jdbc/databases/postgresql.yaml +++ b/test/native/src/test/resources/test-native/yaml/jdbc/databases/postgresql.yaml @@ -24,15 +24,15 @@ dataSources: ds_0: dataSourceClassName: com.zaxxer.hikari.HikariDataSource driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver - jdbcUrl: jdbc:tc:postgresql:16.3-bookworm://test-native-databases-postgres/demo_ds_0 + jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-native-databases-postgres/demo_ds_0 ds_1: dataSourceClassName: com.zaxxer.hikari.HikariDataSource driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver - jdbcUrl: jdbc:tc:postgresql:16.3-bookworm://test-native-databases-postgres/demo_ds_1 + jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-native-databases-postgres/demo_ds_1 ds_2: dataSourceClassName: com.zaxxer.hikari.HikariDataSource driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver - jdbcUrl: jdbc:tc:postgresql:16.3-bookworm://test-native-databases-postgres/demo_ds_2 + jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-native-databases-postgres/demo_ds_2 rules: - !SHARDING diff --git a/test/native/src/test/resources/test-native/yaml/jdbc/databases/sqlserver.yaml b/test/native/src/test/resources/test-native/yaml/jdbc/databases/sqlserver.yaml index 
c80a89f6774ce3..3cc6b019680097 100644 --- a/test/native/src/test/resources/test-native/yaml/jdbc/databases/sqlserver.yaml +++ b/test/native/src/test/resources/test-native/yaml/jdbc/databases/sqlserver.yaml @@ -24,15 +24,15 @@ dataSources: ds_0: dataSourceClassName: com.zaxxer.hikari.HikariDataSource driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver - jdbcUrl: jdbc:tc:sqlserver:2022-CU14-ubuntu-22.04://test-native-databases-mssqlserver;databaseName=demo_ds_0; + jdbcUrl: jdbc:tc:sqlserver:2022-CU16-ubuntu-22.04://test-native-databases-mssqlserver;databaseName=demo_ds_0; ds_1: dataSourceClassName: com.zaxxer.hikari.HikariDataSource driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver - jdbcUrl: jdbc:tc:sqlserver:2022-CU14-ubuntu-22.04://test-native-databases-mssqlserver;databaseName=demo_ds_1; + jdbcUrl: jdbc:tc:sqlserver:2022-CU16-ubuntu-22.04://test-native-databases-mssqlserver;databaseName=demo_ds_1; ds_2: dataSourceClassName: com.zaxxer.hikari.HikariDataSource driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver - jdbcUrl: jdbc:tc:sqlserver:2022-CU14-ubuntu-22.04://test-native-databases-mssqlserver;databaseName=demo_ds_2; + jdbcUrl: jdbc:tc:sqlserver:2022-CU16-ubuntu-22.04://test-native-databases-mssqlserver;databaseName=demo_ds_2; rules: - !SHARDING diff --git a/test/native/src/test/resources/test-native/yaml/jdbc/transactions/base/seata.yaml b/test/native/src/test/resources/test-native/yaml/jdbc/transactions/base/seata.yaml index 89bfe34eb17bc4..951823bf0cb87d 100644 --- a/test/native/src/test/resources/test-native/yaml/jdbc/transactions/base/seata.yaml +++ b/test/native/src/test/resources/test-native/yaml/jdbc/transactions/base/seata.yaml @@ -24,15 +24,15 @@ dataSources: ds_0: dataSourceClassName: com.zaxxer.hikari.HikariDataSource driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver - jdbcUrl: 
jdbc:tc:postgresql:16.3-bookworm://test-native-transactions-base/demo_ds_0?TC_INITSCRIPT=test-native/sql/seata-script-client-at-postgresql.sql + jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-native-transactions-base/demo_ds_0?TC_INITSCRIPT=test-native/sql/seata-script-client-at-postgresql.sql ds_1: dataSourceClassName: com.zaxxer.hikari.HikariDataSource driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver - jdbcUrl: jdbc:tc:postgresql:16.3-bookworm://test-native-transactions-base/demo_ds_1?TC_INITSCRIPT=test-native/sql/seata-script-client-at-postgresql.sql + jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-native-transactions-base/demo_ds_1?TC_INITSCRIPT=test-native/sql/seata-script-client-at-postgresql.sql ds_2: dataSourceClassName: com.zaxxer.hikari.HikariDataSource driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver - jdbcUrl: jdbc:tc:postgresql:16.3-bookworm://test-native-transactions-base/demo_ds_2?TC_INITSCRIPT=test-native/sql/seata-script-client-at-postgresql.sql + jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-native-transactions-base/demo_ds_2?TC_INITSCRIPT=test-native/sql/seata-script-client-at-postgresql.sql rules: - !SHARDING
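Several of the Java changes in this diff revolve around one constraint: `org.apache.hive.jdbc.HiveStatement` does not implement `Statement#getGeneratedKeys()`, so inserts against HiveServer2 must opt out of generated-key retrieval instead of reading back the snowflake ID. A minimal standalone sketch of that pattern follows; the `HiveInsertSketch` class and `insertOrder` helper are hypothetical names introduced here, while the `t_order` columns mirror the test code above:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public final class HiveInsertSketch {

    private HiveInsertSketch() {
    }

    // Same pattern as the PR's insertInHive(): explicitly request no generated
    // keys, so the Hive driver never has to service an unimplemented
    // getGeneratedKeys() call after the insert.
    public static void insertOrder(final Connection connection, final int userId, final int orderType,
                                   final long addressId, final String status) throws SQLException {
        String sql = "INSERT INTO t_order (user_id, order_type, address_id, status) VALUES (?, ?, ?, ?)";
        try (PreparedStatement preparedStatement = connection.prepareStatement(sql, Statement.NO_GENERATED_KEYS)) {
            preparedStatement.setInt(1, userId);
            preparedStatement.setInt(2, orderType);
            preparedStatement.setLong(3, addressId);
            preparedStatement.setString(4, status);
            preparedStatement.executeUpdate();
        }
    }

    public static void main(final String[] args) {
        // The auto-generated-keys modes are plain int constants defined by JDBC:
        // RETURN_GENERATED_KEYS (what the MySQL-oriented insert overload uses)
        // and NO_GENERATED_KEYS (what the HiveServer2 path uses).
        System.out.println(Statement.RETURN_GENERATED_KEYS + " " + Statement.NO_GENERATED_KEYS); // prints "1 2"
    }
}
```

This is also why the `@SuppressWarnings("MagicConstant")` annotation appears on the other `insert` overload: it passes a caller-supplied `int` where the IDE expects one of these named constants.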