
Commit

Merge remote-tracking branch 'origin/Issue#32280_bugfix_on_conflict_for_postgres' into Issue#32280_bugfix_on_conflict_for_postgres
omkar-shitole committed Jan 7, 2025
2 parents 426e9bf + af0a37e commit fee7feb
Showing 273 changed files with 4,097 additions and 6,075 deletions.
4 changes: 4 additions & 0 deletions RELEASE-NOTES.md
Original file line number Diff line number Diff line change
@@ -49,6 +49,9 @@
1. Sharding: Support GroupConcat function for aggregating multiple shards in MySQL, OpenGauss, Doris - [#33808](https://github.com/apache/shardingsphere/pull/33808)
1. Agent: Simplify the use of Agent's Docker Image - [#33356](https://github.com/apache/shardingsphere/pull/33356)
1. Mode: Support modifying Hikari-CP configurations via props in standalone mode - [#34185](https://github.com/apache/shardingsphere/pull/34185)
1. Encrypt: Support insert statement rewrite with quote characters [#34259](https://github.com/apache/shardingsphere/pull/34259)
1. SQL Binder: Support OPTIMIZE TABLE SQL bind and add test case - [#34242](https://github.com/apache/shardingsphere/pull/34242)
1. SQL Binder: Support SHOW CREATE TABLE, SHOW COLUMNS, SHOW INDEX statement bind - [#34271](https://github.com/apache/shardingsphere/pull/34271)

### Bug Fixes

@@ -142,6 +145,7 @@
1. Pipeline: Use case-insensitive identifiers to enhance the table metadata loader
1. Pipeline: Support primary key columns ordering for standard pipeline table metadata loader
1. Sharding: Optimize sharding table index name rewriting rules and remove unnecessary suffix rewriting - [#31171](https://github.com/apache/shardingsphere/issues/31171)
1. Metadata: Support PostgreSQL and openGauss CHARACTER VARYING type metadata load - [#34221](https://github.com/apache/shardingsphere/pull/34221)

### Bug Fixes

@@ -199,7 +199,7 @@ After performing the same access test as before, we can view dependencies throug

## Sampling Rate

The Observability plugin also enables users to set differnt sampling rate configured to suit different scenarios. Zipkin plugins support various sampling rate type configurations including const, counting, rate limiting, and boundary.
The Observability plugin also enables users to set different sampling rate configured to suit different scenarios. Zipkin plugins support various sampling rate type configurations including const, counting, rate limiting, and boundary.

For scenarios with a high volume of requests, we suggest you choose the boundary type and configure it with an appropriate sampling rate to reduce the volume of collected tracing data.
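To make the boundary strategy concrete, here is a minimal, self-contained sketch of how a boundary-style sampler can decide from the trace ID itself, so every span of one trace gets the same verdict without coordination between services. This is an illustration of the idea only, not the Zipkin plugin's actual implementation; the class and method names are hypothetical.

```java
// Toy boundary sampler: the keep/drop decision is a pure function of the
// trace ID, so independently instrumented processes always agree on it.
public final class BoundarySampler {

    private final long boundary;

    public BoundarySampler(float rate) {
        if (rate < 0.0f || rate > 1.0f) {
            throw new IllegalArgumentException("rate must be between 0 and 1");
        }
        // Resolution of 0.01%: rate 0.0001 keeps roughly 1 trace in 10,000.
        this.boundary = (long) (rate * 10_000L);
    }

    /** Deterministic decision: fold the trace ID into [0, 10000) and compare. */
    public boolean isSampled(long traceId) {
        return Math.floorMod(traceId ^ (traceId >>> 32), 10_000L) < boundary;
    }

    public static void main(String[] args) {
        BoundarySampler sampler = new BoundarySampler(0.5f);
        System.out.println(sampler.isSampled(42L)); // → true (42 folds to 42, below 5000)
    }
}
```

Because the decision depends only on the trace ID, raising or lowering the rate later never splits an in-flight trace into partially sampled spans.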

@@ -64,7 +64,7 @@ This means it can effectively solve problems caused by increasing data volume in
## Conclusion
Apache ShardingSphere and openGauss can seek potential cooperation opportunities.

Considering the increasingly diversified applicaiton scenarios and increasing data volume, the requirements for database performance are at an all time high and will only continue to increase in the future.
Considering the increasingly diversified application scenarios and increasing data volume, the requirements for database performance are at an all time high and will only continue to increase in the future.

The success of our cooperation is just the beginning of our two communities building a collaborative database ecosystem.

@@ -167,7 +167,7 @@ Apache ShardingSphere currently only supports federated queries between homogene
## Conclusion
The Apache ShardingSphere community has been active in open source for 7 years. Through perseverance, the community has matured, and we’d like to extend our sincere welcome to any devs or contributors who are enthusiastic about open source and coding to collaborate with us.

Among our recent achievements we’re particulary proud of, Apache ShardingSphere’s pluggable architecture and data sharding philosophy have been recognized by the academic community. [The paper, Apache ShardingSphere: A Holistic and Pluggable Platform for Data Sharding, has been published at this year’s ICDE, a top conference in the database field.](https://faun.pub/a-holistic-pluggable-platform-for-data-sharding-icde-2022-understanding-apache-shardingsphere-55779cfde16)
Among our recent achievements we’re particularly proud of, Apache ShardingSphere’s pluggable architecture and data sharding philosophy have been recognized by the academic community. [The paper, Apache ShardingSphere: A Holistic and Pluggable Platform for Data Sharding, has been published at this year’s ICDE, a top conference in the database field.](https://faun.pub/a-holistic-pluggable-platform-for-data-sharding-icde-2022-understanding-apache-shardingsphere-55779cfde16)

## Author

@@ -68,7 +68,7 @@ Database Mesh 2.0 focuses on how to achieve the following goals in a cloud nativ

> **Developer experience**
As mentioned above, business developers are mainly concerned about business logic and implementation instead of infrastructure, operation and maintenance features. Developement experience will move towards [Serverless](https://www.redhat.com/en/topics/cloud-native-apps/what-is-serverless), which means it will become more and more transparent and intuitive when accessing databases. Developers only need to understand the type of data storage required by their business, and then use preset or dynamic ID credential information to access corresponding database services.
As mentioned above, business developers are mainly concerned about business logic and implementation instead of infrastructure, operation and maintenance features. Development experience will move towards [Serverless](https://www.redhat.com/en/topics/cloud-native-apps/what-is-serverless), which means it will become more and more transparent and intuitive when accessing databases. Developers only need to understand the type of data storage required by their business, and then use preset or dynamic ID credential information to access corresponding database services.

> **Programmable**
@@ -104,7 +104,7 @@ The data generated from test user order creation will be routed to shadow databa

As mentioned in the introduction, full-link online stress testing is a complicated task that requires collaboration between microservices and middleware to handle different types of traffic and the transmission of stress-testing tags.

Additonally, the testing service should be stateless and immediately available. [CyborgFlow](https://github.com/SphereEx/CyborgFlow), which is jointly maintained by Apache ShardingSphere, Apache APISIX and Apache SkyWalking provides out-of-the-box (OoTB) solution to run load test in your online system.
Additionally, the testing service should be stateless and immediately available. [CyborgFlow](https://github.com/SphereEx/CyborgFlow), which is jointly maintained by Apache ShardingSphere, Apache APISIX and Apache SkyWalking provides out-of-the-box (OoTB) solution to run load test in your online system.

[Apache APISIX](https://apisix.apache.org/) is responsible for making tags on testing data at the gateway layer, while [Apache SkyWalking](https://skywalking.apache.org/) is responsible for transmission through the whole scheduling link, and finally, Apache ShardingSphere-Proxy will isolate data and route testing data to the shadow database.
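The routing described above boils down to a tag check: if a request carries the stress-test flag propagated along the call chain, its writes go to the shadow database. A toy stand-in sketch follows; real ShardingSphere shadow routing is configured declaratively through shadow algorithms rather than hand-written like this, and the tag name and datasource names here are hypothetical.

```java
import java.util.Map;

// Toy router illustrating tag-based shadow routing: requests carrying the
// stress-test flag go to the shadow datasource, everything else to prod.
public final class ShadowRouter {

    static final String SHADOW_TAG = "cyborg-flow"; // hypothetical header name

    /** Picks a datasource name based on the propagated request headers. */
    static String route(Map<String, String> headers) {
        boolean isStressTest = "true".equalsIgnoreCase(headers.get(SHADOW_TAG));
        return isStressTest ? "ds_shadow" : "ds_prod";
    }

    public static void main(String[] args) {
        System.out.println(route(Map.of("cyborg-flow", "true"))); // → ds_shadow
        System.out.println(route(Map.of()));                      // → ds_prod
    }
}
```

The important property is that the tag travels with the request end to end (here via APISIX and SkyWalking), so the routing decision stays consistent across every service the test traffic touches.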

@@ -189,7 +189,7 @@ Input command

Output

a. If sucessful, show "Query OK, 0 rows affected";
a. If successful, show "Query OK, 0 rows affected";
b. Re-execute `show readwrite_splitting hint status`; the output shows the source is changed to Write;
c. Execute `preview select * from t_order` and see that the queried SQL will go to the master database.

2 changes: 1 addition & 1 deletion docs/document/content/overview/_index.en.md
@@ -65,7 +65,7 @@ ShardingSphere offers a flat learning curve to DBAs and is interaction-friendly

It can provide enhancement capability based on mature databases while ensuring security and stability.

- Elastic Extention
- Elastic Extension

It supports computing, storage, and smooth online expansion, which can meet diverse business needs.

@@ -38,6 +38,7 @@
import org.apache.shardingsphere.infra.rewrite.sql.token.common.generator.builder.SQLTokenGeneratorBuilder;
import org.apache.shardingsphere.infra.route.context.RouteContext;
import org.apache.shardingsphere.sql.parser.statement.core.segment.dml.predicate.WhereSegment;
import org.apache.shardingsphere.sql.parser.statement.core.segment.generic.table.SimpleTableSegment;

import java.util.Collection;
import java.util.Collections;
@@ -68,8 +69,8 @@ private boolean containsEncryptTable(final EncryptRule rule, final SQLStatementC
if (!(sqlStatementContext instanceof TableAvailable)) {
return false;
}
for (String each : ((TableAvailable) sqlStatementContext).getTablesContext().getTableNames()) {
if (rule.findEncryptTable(each).isPresent()) {
for (SimpleTableSegment each : ((TableAvailable) sqlStatementContext).getTablesContext().getSimpleTables()) {
if (rule.findEncryptTable(each.getTableName().getIdentifier().getValue()).isPresent()) {
return true;
}
}
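The change above swaps `getTableNames()` for `getSimpleTables()` so the bare identifier value is what gets matched against the encrypt rule. A simplified, self-contained sketch of that lookup follows; the record types are stand-ins that only mirror the spirit of ShardingSphere's `SimpleTableSegment`/`IdentifierValue`, which carry more state such as quote characters and indexes.

```java
import java.util.List;
import java.util.Set;

// Simplified stand-ins for the ShardingSphere segment classes, used to
// illustrate matching referenced tables against configured encrypt rules.
public final class EncryptTableLookup {

    record IdentifierValue(String value) {}

    record SimpleTableSegment(IdentifierValue tableName) {}

    /** True when any referenced table has an encrypt rule configured. */
    static boolean containsEncryptTable(Set<String> encryptTables, List<SimpleTableSegment> tables) {
        for (SimpleTableSegment each : tables) {
            // Match on the bare identifier value, not a possibly quoted rendering.
            if (encryptTables.contains(each.tableName().value())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> rules = Set.of("t_user");
        List<SimpleTableSegment> tables = List.of(new SimpleTableSegment(new IdentifierValue("t_user")));
        System.out.println(containsEncryptTable(rules, tables)); // → true
    }
}
```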
@@ -84,15 +84,17 @@ private Optional<EncryptAssignmentToken> generateSQLToken(final String schemaNam
}

private EncryptAssignmentToken generateParameterSQLToken(final EncryptColumn encryptColumn, final ColumnAssignmentSegment segment) {
EncryptParameterAssignmentToken result = new EncryptParameterAssignmentToken(segment.getColumns().get(0).getStartIndex(), segment.getStopIndex());
EncryptParameterAssignmentToken result =
new EncryptParameterAssignmentToken(segment.getColumns().get(0).getStartIndex(), segment.getStopIndex(), segment.getColumns().get(0).getIdentifier().getQuoteCharacter());
result.addColumnName(encryptColumn.getCipher().getName());
encryptColumn.getAssistedQuery().ifPresent(optional -> result.addColumnName(optional.getName()));
encryptColumn.getLikeQuery().ifPresent(optional -> result.addColumnName(optional.getName()));
return result;
}

private EncryptAssignmentToken generateLiteralSQLToken(final String schemaName, final String tableName, final EncryptColumn encryptColumn, final ColumnAssignmentSegment segment) {
EncryptLiteralAssignmentToken result = new EncryptLiteralAssignmentToken(segment.getColumns().get(0).getStartIndex(), segment.getStopIndex());
EncryptLiteralAssignmentToken result =
new EncryptLiteralAssignmentToken(segment.getColumns().get(0).getStartIndex(), segment.getStopIndex(), segment.getColumns().get(0).getIdentifier().getQuoteCharacter());
addCipherAssignment(schemaName, tableName, encryptColumn, segment, result);
addAssistedQueryAssignment(schemaName, tableName, encryptColumn, segment, result);
addLikeAssignment(schemaName, tableName, encryptColumn, segment, result);
@@ -36,6 +36,7 @@
import org.apache.shardingsphere.infra.rewrite.sql.token.common.pojo.generic.SubstitutableColumnNameToken;
import org.apache.shardingsphere.sql.parser.statement.core.segment.dml.column.ColumnSegment;
import org.apache.shardingsphere.sql.parser.statement.core.segment.dml.column.InsertColumnsSegment;
import org.apache.shardingsphere.sql.parser.statement.core.value.identifier.IdentifierValue;

import java.util.Collection;
import java.util.Collections;
@@ -75,7 +76,8 @@ public Collection<SQLToken> generateSQLTokens(final InsertStatementContext inser
String columnName = each.getIdentifier().getValue();
if (encryptTable.isEncryptColumn(columnName)) {
Collection<Projection> projections =
Collections.singleton(new ColumnProjection(null, encryptTable.getEncryptColumn(columnName).getCipher().getName(), null, insertStatementContext.getDatabaseType()));
Collections.singleton(new ColumnProjection(null, new IdentifierValue(encryptTable.getEncryptColumn(columnName).getCipher().getName(), each.getIdentifier().getQuoteCharacter()),
null, insertStatementContext.getDatabaseType()));
result.add(new SubstitutableColumnNameToken(each.getStartIndex(), each.getStopIndex(), projections, insertStatementContext.getDatabaseType()));
}
}
@@ -28,6 +28,8 @@
import org.apache.shardingsphere.infra.binder.context.segment.select.projection.Projection;
import org.apache.shardingsphere.infra.binder.context.statement.SQLStatementContext;
import org.apache.shardingsphere.infra.binder.context.statement.dml.InsertStatementContext;
import org.apache.shardingsphere.infra.database.core.metadata.database.enums.QuoteCharacter;
import org.apache.shardingsphere.infra.database.core.type.DatabaseTypeRegistry;
import org.apache.shardingsphere.infra.exception.core.ShardingSpherePreconditions;
import org.apache.shardingsphere.infra.exception.generic.UnsupportedSQLOperationException;
import org.apache.shardingsphere.infra.rewrite.sql.token.common.generator.OptionalSQLTokenGenerator;
@@ -96,8 +98,9 @@ private UseDefaultInsertColumnsToken generateNewSQLToken(final InsertStatementCo
ShardingSpherePreconditions.checkState(InsertSelectColumnsEncryptorComparator.isSame(derivedInsertColumns, projections, rule),
() -> new UnsupportedSQLOperationException("Can not use different encryptor in insert select columns"));
}
QuoteCharacter quoteCharacter = new DatabaseTypeRegistry(insertStatementContext.getDatabaseType()).getDialectDatabaseMetaData().getQuoteCharacter();
return new UseDefaultInsertColumnsToken(
insertColumnsSegment.get().getStopIndex(), getColumnNames(insertStatementContext, rule.getEncryptTable(tableName), insertStatementContext.getColumnNames()));
insertColumnsSegment.get().getStopIndex(), getColumnNames(insertStatementContext, rule.getEncryptTable(tableName), insertStatementContext.getColumnNames()), quoteCharacter);
}

private List<String> getColumnNames(final InsertStatementContext sqlStatementContext, final EncryptTable encryptTable, final List<String> currentColumnNames) {
@@ -57,7 +57,7 @@ public Collection<SQLToken> generateSQLTokens(final InsertStatementContext inser
for (ColumnSegment each : insertStatementContext.getSqlStatement().getColumns()) {
List<String> derivedColumnNames = getDerivedColumnNames(encryptTable, each);
if (!derivedColumnNames.isEmpty()) {
result.add(new InsertColumnsToken(each.getStopIndex() + 1, derivedColumnNames));
result.add(new InsertColumnsToken(each.getStopIndex() + 1, derivedColumnNames, each.getIdentifier().getQuoteCharacter()));
}
}
return result;
@@ -115,7 +115,8 @@ private Optional<EncryptAssignmentToken> generateSQLToken(final String schemaNam
}

private EncryptAssignmentToken generateParameterSQLToken(final EncryptTable encryptTable, final ColumnAssignmentSegment assignmentSegment) {
EncryptParameterAssignmentToken result = new EncryptParameterAssignmentToken(assignmentSegment.getColumns().get(0).getStartIndex(), assignmentSegment.getStopIndex());
EncryptParameterAssignmentToken result = new EncryptParameterAssignmentToken(assignmentSegment.getColumns().get(0).getStartIndex(), assignmentSegment.getStopIndex(),
assignmentSegment.getColumns().get(0).getIdentifier().getQuoteCharacter());
String columnName = assignmentSegment.getColumns().get(0).getIdentifier().getValue();
EncryptColumn encryptColumn = encryptTable.getEncryptColumn(columnName);
result.addColumnName(encryptColumn.getCipher().getName());
@@ -126,7 +127,8 @@ private EncryptAssignmentToken generateParameterSQLToken(final EncryptTable encr

private EncryptAssignmentToken generateLiteralSQLToken(final String schemaName, final String tableName,
final EncryptColumn encryptColumn, final ColumnAssignmentSegment assignmentSegment) {
EncryptLiteralAssignmentToken result = new EncryptLiteralAssignmentToken(assignmentSegment.getColumns().get(0).getStartIndex(), assignmentSegment.getStopIndex());
EncryptLiteralAssignmentToken result = new EncryptLiteralAssignmentToken(assignmentSegment.getColumns().get(0).getStartIndex(), assignmentSegment.getStopIndex(),
assignmentSegment.getColumns().get(0).getIdentifier().getQuoteCharacter());
addCipherAssignment(schemaName, tableName, encryptColumn, assignmentSegment, result);
addAssistedQueryAssignment(schemaName, tableName, encryptColumn, assignmentSegment, result);
addLikeAssignment(schemaName, tableName, encryptColumn, assignmentSegment, result);
@@ -139,7 +141,8 @@ private EncryptAssignmentToken generateValuesSQLToken(final EncryptTable encrypt
Optional<ExpressionSegment> valueColumnSegment = functionSegment.getParameters().stream().findFirst();
Preconditions.checkState(valueColumnSegment.isPresent());
String valueColumn = ((ColumnSegment) valueColumnSegment.get()).getIdentifier().getValue();
EncryptFunctionAssignmentToken result = new EncryptFunctionAssignmentToken(columnSegment.getStartIndex(), assignmentSegment.getStopIndex());
EncryptFunctionAssignmentToken result =
new EncryptFunctionAssignmentToken(columnSegment.getStartIndex(), assignmentSegment.getStopIndex(), assignmentSegment.getColumns().get(0).getIdentifier().getQuoteCharacter());
boolean isEncryptColumn = encryptTable.isEncryptColumn(column);
boolean isEncryptValueColumn = encryptTable.isEncryptColumn(valueColumn);
EncryptColumn encryptColumn = encryptTable.getEncryptColumn(column);
@@ -18,6 +18,7 @@
package org.apache.shardingsphere.encrypt.rewrite.token.pojo;

import lombok.Getter;
import org.apache.shardingsphere.infra.database.core.metadata.database.enums.QuoteCharacter;
import org.apache.shardingsphere.infra.rewrite.sql.token.common.pojo.SQLToken;
import org.apache.shardingsphere.infra.rewrite.sql.token.common.pojo.Substitutable;

@@ -29,8 +30,11 @@ public abstract class EncryptAssignmentToken extends SQLToken implements Substit

private final int stopIndex;

protected EncryptAssignmentToken(final int startIndex, final int stopIndex) {
private final QuoteCharacter quoteCharacter;

protected EncryptAssignmentToken(final int startIndex, final int stopIndex, final QuoteCharacter quoteCharacter) {
super(startIndex);
this.stopIndex = stopIndex;
this.quoteCharacter = quoteCharacter;
}
}
@@ -18,6 +18,7 @@
package org.apache.shardingsphere.encrypt.rewrite.token.pojo;

import lombok.RequiredArgsConstructor;
import org.apache.shardingsphere.infra.database.core.metadata.database.enums.QuoteCharacter;

import java.util.Collection;
import java.util.LinkedList;
@@ -31,8 +32,8 @@ public final class EncryptFunctionAssignmentToken extends EncryptAssignmentToken

private final Collection<FunctionAssignment> assignments = new LinkedList<>();

public EncryptFunctionAssignmentToken(final int startIndex, final int stopIndex) {
super(startIndex, stopIndex);
public EncryptFunctionAssignmentToken(final int startIndex, final int stopIndex, final QuoteCharacter quoteCharacter) {
super(startIndex, stopIndex, quoteCharacter);
}

/**
@@ -42,7 +43,7 @@ public EncryptFunctionAssignmentToken(final int startIndex, final int stopIndex)
* @param value assignment value
*/
public void addAssignment(final String columnName, final Object value) {
FunctionAssignment functionAssignment = new FunctionAssignment(columnName, value);
FunctionAssignment functionAssignment = new FunctionAssignment(columnName, value, getQuoteCharacter());
assignments.add(functionAssignment);
builder.append(functionAssignment).append(", ");
}
@@ -68,9 +69,11 @@ private static final class FunctionAssignment {

private final Object value;

private final QuoteCharacter quoteCharacter;

@Override
public String toString() {
return String.format("%s = %s", columnName, value);
return quoteCharacter.wrap(columnName) + " = " + value;
}
}
}
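The net effect of threading `QuoteCharacter` through the assignment tokens is that rewritten fragments keep the dialect's identifier quoting. A minimal stand-in sketch of that rendering follows; the real `QuoteCharacter` enum lives in ShardingSphere's infra module and covers more dialects, so the enum here is only an illustration.

```java
// Stand-in for the QuoteCharacter enum used by the tokens above; the real
// enum in shardingsphere-infra also models brackets, parentheses, etc.
public final class QuoteDemo {

    enum QuoteCharacter {
        BACKTICK("`", "`"),  // MySQL
        QUOTE("\"", "\""),   // PostgreSQL / openGauss
        NONE("", "");

        private final String start;
        private final String end;

        QuoteCharacter(String start, String end) {
            this.start = start;
            this.end = end;
        }

        String wrap(String value) {
            return start + value + end;
        }
    }

    /** Renders an assignment the way FunctionAssignment.toString now does. */
    static String renderAssignment(QuoteCharacter quote, String columnName, String value) {
        return quote.wrap(columnName) + " = " + value;
    }

    public static void main(String[] args) {
        // Before the fix the column was emitted bare, which breaks rewrites
        // such as PostgreSQL ON CONFLICT when column names require quoting.
        System.out.println(renderAssignment(QuoteCharacter.QUOTE, "cipher_pwd", "EXCLUDED.\"cipher_pwd\""));
        // → "cipher_pwd" = EXCLUDED."cipher_pwd"
    }
}
```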
