
The logic for converting values for writing is confusing and I think very wrong. #1136

Open
schauder opened this issue Jan 20, 2022 · 7 comments · May be fixed by #1517
Labels
in: mapping Mapping and conversion infrastructure

Comments

@schauder
Contributor

schauder commented Jan 20, 2022

This became obvious when analysing #1127.
Part of this problem is hopefully fixed by the fix for #1089.

We have multiple checks for determining what data type a value should get converted to, and I don't think they add up to anything I understand.

BasicRelationalConverter.writeValue(@Nullable Object value, TypeInformation<?> type) converts the value twice when it is a simple type and only once otherwise.
At the very least this should be refactored so I can understand why it is doing that.

Concept for a rewrite of the writing conversion logic.

The goal is to have a clean and conceptually consistent process for converting Java values to database values.

The process should look like this:

  1. Identify the conversion target
    The conversion target is a Java type + a java.sql.SQLType (typically a java.sql.JDBCType)

  2. Perform the conversion.
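This two-step shape could be sketched roughly as follows. All names here are hypothetical illustrations, not actual Spring Data JDBC API; the only concrete rule shown is the enum-to-String standard conversion mentioned below.

```java
import java.sql.JDBCType;
import java.sql.SQLType;
import java.time.DayOfWeek;

// Hypothetical sketch of the proposed two-step write conversion.
public class WriteConversionSketch {

    // Result of step 1: a Java type paired with an SQLType.
    record ConversionTarget(Class<?> javaType, SQLType sqlType) {}

    // Step 1: identify the conversion target (only the enum -> String rule shown).
    static ConversionTarget identifyTarget(Class<?> sourceType) {
        if (sourceType.isEnum()) {
            return new ConversionTarget(String.class, JDBCType.VARCHAR);
        }
        return new ConversionTarget(sourceType, JDBCType.OTHER);
    }

    // Step 2: perform the conversion to the identified target.
    static Object convert(Object value, ConversionTarget target) {
        if (target.javaType() == String.class && value instanceof Enum<?> e) {
            return e.name();
        }
        return value;
    }

    public static void main(String[] args) {
        ConversionTarget target = identifyTarget(DayOfWeek.class);
        System.out.println(convert(DayOfWeek.MONDAY, target) + " as " + target.sqlType());
        // MONDAY as VARCHAR
    }
}
```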

Identifying the conversion target should follow these rules:

  • If it is an entity we don't convert it.

  • If it is collection-like, the target is the same kind of collection. We run the same process to determine the elements of the collection.

  • If there is a matching WritingConverter it determines the target.

  • If the incoming type in question is a simple type, the target type is just that type.

  • Otherwise we apply the standard conversions (e.g. Enums get converted to Strings ... we need to inspect what is currently in the source code).
    I wonder if we can identify the conversion target without actually applying it in this step.

  • There is special handling for arrays ... we need to slot them in somehow. I currently don't think they need special consideration in the conversion process, but I might be wrong about this.

  • Once the target type is determined the SQLType gets derived from that.
    This is either a lookup from JdbcUtil or it is part of the conversion target itself.
    I guess that means we don't have a SQLType, but a Function<Object, SQLType>.
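The resolution order above could be sketched as a single decision chain. This is a hypothetical sketch under assumed names; the converter registry, the entity check, and the simple-type check are stand-ins for what the mapping context and conversion service would actually provide.

```java
import java.sql.JDBCType;
import java.sql.SQLType;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the target-resolution order; not real Spring Data JDBC API.
public class TargetResolverSketch {

    record Target(Class<?> javaType, SQLType sqlType) {}

    // Stand-in for registered writing converters: source type -> target type.
    static final Map<Class<?>, Class<?>> writingConverters = Map.of();

    static Optional<Target> resolve(Class<?> type) {
        if (isEntity(type)) {
            return Optional.empty(); // entities are not converted here
        }
        // (collection-like types would recurse on their element type)
        Class<?> custom = writingConverters.get(type);
        if (custom != null) {
            return Optional.of(new Target(custom, sqlTypeFor(custom)));
        }
        if (isSimple(type)) {
            return Optional.of(new Target(type, sqlTypeFor(type)));
        }
        // standard fallback conversions, e.g. enum -> String
        if (type.isEnum()) {
            return Optional.of(new Target(String.class, JDBCType.VARCHAR));
        }
        return Optional.of(new Target(type, JDBCType.OTHER));
    }

    static boolean isEntity(Class<?> type) {
        return false; // placeholder; real code would consult the mapping context
    }

    static boolean isSimple(Class<?> type) {
        return type == String.class || Number.class.isAssignableFrom(type);
    }

    // The SQLType is derived from the resolved Java type (cf. the JdbcUtil lookup).
    static SQLType sqlTypeFor(Class<?> type) {
        if (type == String.class) return JDBCType.VARCHAR;
        if (type == Integer.class) return JDBCType.INTEGER;
        return JDBCType.OTHER;
    }

    public static void main(String[] args) {
        System.out.println(resolve(Integer.class).get().sqlType()); // INTEGER
    }
}
```

Note that the SQLType is always derived last, from whatever Java type the earlier rules settled on, which matches the final bullet above.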

Part of the challenge, and why the current implementation is so convoluted, is that we have different sets of information going into this process:

  1. for save operations we do have the property.
  2. for derived queries we do have the property.
  3. for annotated query methods we do have the method parameter.
  4. I'm not even sure what we do have for SpEL expressions.

The class structure should lean heavily on Spring Data Cassandra's, but with a combination of Java type + SQLType instead of just a Java type as the target.

https://github.com/spring-projects/spring-data-cassandra/blob/main/spring-data-cassandra/src/main/java/org/springframework/data/cassandra/core/convert/ColumnType.java
https://github.com/spring-projects/spring-data-cassandra/blob/main/spring-data-cassandra/src/main/java/org/springframework/data/cassandra/core/convert/ColumnTypeResolver.java
https://github.com/spring-projects/spring-data-cassandra/blob/main/spring-data-cassandra/src/main/java/org/springframework/data/cassandra/core/convert/DefaultColumnTypeResolver.java
https://github.com/spring-projects/spring-data-cassandra/blob/main/spring-data-cassandra/src/main/java/org/springframework/data/cassandra/core/convert/MappingCassandraConverter.java

Related issues

The following issues might either get fixed as a side effect of this issue or become much easier to fix after it is resolved.

#787
#1023
#725
#629

@schauder
Contributor Author

#629 is also caused by this.

@schauder
Contributor Author

The current process of BasicJdbcConverter for writing a value is:

  1. determine the target type (T0) and the SQL type
  2. convert to the target type if possible
  3. check if there is an applicable custom writing conversion
  4. if so, convert to the target type of that custom conversion (T1)

=> the SQL type determined in the beginning does not match the target type T1, resulting in a whole bunch of errors.

Side problem: null values don't get converted by Spring Framework converters

Better process:

  1. Check if there is a custom conversion.
  2. If so, apply it. Determine the SQL type from the conversion target.
  3. If there is no custom conversion, use the configured target type and SQL type and use the conversion service to convert to it.
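The essential point of the better ordering can be sketched abstractly: apply the custom conversion first and derive the SQL type from its actual result (T1), so the two can never drift apart. All names here are hypothetical; the real behavior lives in BasicJdbcConverter.

```java
import java.sql.JDBCType;
import java.sql.SQLType;

// Hypothetical sketch of the proposed ordering: in the better process the
// SQLType is derived from the custom conversion's result, so it cannot
// disagree with the value actually written.
public class OrderingSketch {

    record Converted(Object value, SQLType sqlType) {}

    // Stand-in custom conversion: some source type -> String.
    static Object customConvert(Object value) {
        return value.toString();
    }

    // Better process: convert first, then derive the SQLType from the
    // conversion result (T1) rather than from the original type (T0).
    static Converted write(Object value) {
        Object converted = customConvert(value);
        return new Converted(converted, sqlTypeFor(converted.getClass()));
    }

    static SQLType sqlTypeFor(Class<?> type) {
        return type == String.class ? JDBCType.VARCHAR : JDBCType.OTHER;
    }

    public static void main(String[] args) {
        Converted c = write(42);
        System.out.println(c.value() + " -> " + c.sqlType()); // 42 -> VARCHAR
    }
}
```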

@mipo256
Contributor

mipo256 commented Jan 31, 2022

Jens @schauder, I think I can try to make this work. I have already invested some time in understanding the guts of this process, so I think it will make sense. May I take a look at it?

@schauder
Contributor Author

schauder commented Feb 1, 2022

Hi Mikhail,

of course you may take a look!
But as a fair warning: this is a pretty central part of Spring Data JDBC, and it is perfectly possible that we have some ideas that we haven't properly documented yet, which could lead to a failure to accept an otherwise good PR.

If you are fine with that risk, please go ahead. And please let us know if you do, so we can leave it to you, at least for some time.

@mipo256
Contributor

mipo256 commented Feb 2, 2022

Yes, I think I am perfectly fine with that; I will try to fix this issue :) If there are any debatable questions during implementation, I will try to reach out to you :)

@schauder
Contributor Author

Things also to consider:

  • collection like values
  • values passed as query parameters where we don't know the referred property (@Query annotation) vs those where we do (Query derivation).
  • arrays when the underlying database does or does not support arrays

@schauder
Contributor Author

schauder commented Jul 5, 2024

#1828 is also related to this.

At least it helps me understand why there are two conversions. See #1829

@schauder schauder linked a pull request Oct 2, 2024 that will close this issue