Vastly refactored property --> jdbc value mapping api #1517
base: main
Conversation
@schauder I hope you will get a chance to look at it. This PR will allow us to proceed on other related issues, as I mentioned, and allow other contributors to use the already refactored code as a baseline.

Hi Jens @schauder! Are you going to review it soon?

@schauder @patricklucas Will this PR be considered?

We are currently in the process of refactoring our converters with #1618. The recent changes have cleaned up our code by unifying several approaches into a single one. Looking forward, we want to investigate a conversion process that first transforms all values into …. Do you want to revisit your pull request once #1618 is merged?

@mp911de Yeah, sure, I will.

@mipo256 The issue mentioned above is resolved, so this would be a good point in time to revisit this PR.

I was referring to Mark's comment about waiting for #1618 before revisiting this issue.
@@ -89,7 +89,7 @@ public ConversionService getConversionService() {
 		return conversionService;
 	}

-	public CustomConversions getConversions() {
+	public CustomConversions getCustomConversions() {
This is a breaking change.
I agree. I don't know, I just renamed it for readability, so that it is not confused in the code with `ConversionService`. Do you think it should be rolled back, or will this PR go into the next major release, so it can wait?
Please do not introduce breaking changes if you're looking for inclusion in a minor release.
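One common way to avoid the break is to keep the old accessor as a deprecated delegate for a release cycle. The sketch below is illustrative only (it uses an `Object` stand-in for `CustomConversions` and a hypothetical class name, not the actual spring-data-jdbc source):

```java
// Hypothetical sketch: instead of renaming getConversions() outright (a
// breaking change), keep the old name as a deprecated bridge so existing
// callers keep compiling in a minor release.
class ConverterSketch {

    // Stand-in for the real CustomConversions field.
    private final Object customConversions = new Object();

    /** New, clearer name. */
    public Object getCustomConversions() {
        return customConversions;
    }

    /** Old name kept as a deprecated delegate for one release cycle. */
    @Deprecated
    public Object getConversions() {
        return getCustomConversions();
    }
}
```

The deprecated method can then be removed in the next major release without ever breaking a minor one.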
@@ -152,6 +151,11 @@ public SQLType getTargetSqlType(RelationalPersistentProperty property) {
 		return JdbcUtil.targetSqlTypeFor(getColumnType(property));
 	}

+	@Override
+	public SQLType getTargetSqlType(Class<?> property) {
I am not quite sure this is the right approach. Right below, there's a `getColumnType` method that operates on the level of a property. Conversion is supposed to take custom converters into account, and depending on the database dialect, a `SQLType` might change. I suppose we need a larger revision of the converter to ensure that we sketch out the possible cases for obtaining a `SQLType` (string-based query, in the context of a property) and define a path forward regarding the necessary changes.
Let me break my change into several parts:

The reason I introduced this method is to optimize the target `SQLType` search. The public `getTargetSqlType(RelationalPersistentProperty<?> property)` first resolves the `Class` of the given property, and then it searches for the `SQLType` of that `Class`. The new method exists mainly for the case where we already know which `Class` actually backs the property, so we do not need to discover it over and over again.

Now, let me explain my thoughts on your suggestion. I generally agree with you, and it certainly makes sense, but this PR is solving another problem: eliminating the incorrect conversions that are applied at the wrong times. See the bugs linked. I think we should do the following: if we recognize that there is a need to determine the `SQLType` depending on the dialect (from some issue, for instance), we will do so, but for now I suggest leaving it as is.
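The overload relationship described above can be sketched roughly like this. All names below are illustrative simplifications (the real code delegates to `JdbcUtil.targetSqlTypeFor`, and the property type is `RelationalPersistentProperty`, not the minimal interface used here):

```java
import java.sql.JDBCType;
import java.sql.SQLType;

// Illustrative sketch of the two overloads: the property-based variant
// resolves the backing Class once and delegates to the Class-based variant,
// so callers that already know the Class can skip the lookup entirely.
class SqlTypeResolver {

    /** Minimal stand-in for RelationalPersistentProperty. */
    interface PropertyLike {
        Class<?> getActualType();
    }

    /** Property-based entry point: resolves the Class, then delegates. */
    public SQLType getTargetSqlType(PropertyLike property) {
        return getTargetSqlType(property.getActualType());
    }

    /** Class-based entry point for callers that already know the type. */
    public SQLType getTargetSqlType(Class<?> type) {
        // Stand-in for JdbcUtil.targetSqlTypeFor(...): map a few common types.
        if (type == String.class) return JDBCType.VARCHAR;
        if (type == Integer.class || type == int.class) return JDBCType.INTEGER;
        return JDBCType.OTHER;
    }
}
```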
What are your thoughts on this, @mp911de?
My main concern is that we're continuing to add functionality in terms of putting patches around something that is not designed in the right way instead of fixing the core issue.
I think @schauder should decide how to proceed here.
Hello @schauder !
I have finally come up with a draft solution for issue #1136. All test cases pass, and it is now far clearer, at least from my perspective. I will provide a definitive guide on how this mapping from property --> JdbcValue looks now. You should focus your attention on the … method. This is where the new logic is implemented. Here is how it works:
1. If the provided value is `null`, then we just return `null` wrapped in a `JdbcValue` whose `SQLType` is that of the original type of the value. Here is the interesting thing: the `java.sql.JDBCType` class has a `NULL` type, and almost all databases accept it, except DB2, which wants the original type of the value to be present here; that is why, unfortunately, we have to pass `originalValueType` as a parameter.
2. If it is not null, then we immediately apply the conversions defined by the user or by us. This is important, since if the user has defined a conversion from `OffsetDateTime` to `Timestamp`, then they would expect that conversion to be applied for such a field, which was not the case before, since we converted the value ourselves first. I think that is very important for the related issues "Same type (java.sql.Timestamp) turns into different SQL-types" #1089 and "Loading a LocalDate results in timezones (wrongly) getting applied" #1127.
3. If the value resulting from the conversion is `null` (and it was not `null` originally, we checked that previously), then we understand that either our own or the user's conversion explicitly returned `null`, so this is the result we or the user want, and we return `null` wrapped in a `JdbcValue` whose `SQLType` is that of the original type of the value (because of DB2 again).
4. If the converted value is of type `JdbcValue` (or the original value can be of that type as well), we just assume, as the previous code did, that such a value is the final result. So if the user, or we inside the framework, return a `JdbcValue` as the result of a conversion, no further logic is applied; we just return the resulting `JdbcValue`.
5. If the converted or original value is of type `AggregateReference`, then we recursively try to create a `JdbcValue` for `AggregateReference#id`. I borrowed that logic from the `writeValue` method, because it seems to be correct.
6. Then we need to understand whether we applied a conversion or not. This part I think I should explain in detail. If we applied a conversion and got a generic type, it will not be possible to deduce this generic type at runtime, due to type erasure in Java, since the variable is local. However, if the conversion was not applied and there was initially a generic type, then we will be provided with the initial generic type from outside. This gives us a more precise `JDBCType` at runtime; otherwise we will just have `JDBCType.UNKNOWN`. For example, both now and prior to my changes, if the user creates a converter that converts some value to `Set<String>`, for instance (I would say it is a very rare case, since we do not even have a test case in the project for this scenario), then the framework will try to create an SQL ARRAY with `typeName` as `UNKNOWN`, using this JDBC API: …. Is that a problem? Yes, it is. Some JDBC drivers accept this as an array type, but some do not. So here we are limited by Java's restrictions, at least for now. That is why I decided to pass the original generic type into the method (if applicable for the given value, of course). The goal is to at least cover the case when a converter was not applied, which is the most common scenario.
7. The rest is almost the same as it was: if we have a collection as the value, we convert it into an array of the most precise type we can. Then, if the value is an array, we create an SQL ARRAY value from it, as we did before. If the value is simple, then I use the `JdbcUtil` class, as the previous code did as well.

Please let me know what you think about it. I am sure there is a lot to discuss, but we need to refactor this, at least for our own understanding.
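The decision flow in the list above can be sketched as follows. This is a hypothetical, heavily simplified sketch: `JdbcValue` is a minimal stand-in record, the conversion pipeline is abstracted as a `Function`, and the `AggregateReference` recursion and precise array element typing are omitted; the real PR works against Spring Data's converter infrastructure.

```java
import java.sql.JDBCType;
import java.sql.SQLType;
import java.util.Collection;
import java.util.function.Function;

// Sketch of the property --> JdbcValue decision flow described above.
final class JdbcValueSketch {

    /** Minimal stand-in for the framework's JdbcValue. */
    record JdbcValue(Object value, SQLType type) {}

    static JdbcValue toJdbcValue(Object value, Class<?> originalValueType,
                                 Function<Object, Object> conversions) {
        // 1. null in: return null, typed with the original value type (DB2 needs it).
        if (value == null) {
            return new JdbcValue(null, targetSqlTypeFor(originalValueType));
        }
        // 2. Apply user/framework conversions first, before any built-in handling.
        Object converted = conversions.apply(value);
        // 3. Conversion explicitly produced null: honor it, keep the original type.
        if (converted == null) {
            return new JdbcValue(null, targetSqlTypeFor(originalValueType));
        }
        // 4. A converter may return a ready JdbcValue: treat it as final.
        if (converted instanceof JdbcValue jv) {
            return jv;
        }
        // 5. (An AggregateReference would recurse on its id here; omitted.)
        // 6./7. Collections become SQL ARRAY values; simple values fall
        // through to a plain type lookup.
        if (converted instanceof Collection<?> c) {
            return new JdbcValue(c.toArray(), JDBCType.ARRAY);
        }
        return new JdbcValue(converted, targetSqlTypeFor(converted.getClass()));
    }

    // Stand-in for JdbcUtil.targetSqlTypeFor(...).
    static SQLType targetSqlTypeFor(Class<?> type) {
        if (type == String.class) return JDBCType.VARCHAR;
        return JDBCType.OTHER;
    }
}
```

Under these assumptions, the key ordering property is visible in the code: conversions run before any built-in handling (step 2 precedes the collection/array branch), which is exactly what issues #1089 and #1127 call for.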