In Spark, any object can be used as a column type. While the built-in column types have ordered domains and thus can be bounded through ranges, this is not the case for arbitrary UDTs. We should fall back to boolean attribute-level annotations for such data types. This requires changes to `CaveatRangeExpression` to handle both annotation types, as well as to code that assumes it knows the structure of annotations, such as `CaveatRangePlan` and `CaveatRangeEncoding`.
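A minimal sketch of the proposed fallback, in plain Python rather than the project's Scala codebase. The annotation classes, the `annotate` helper, and the set of "ordered" types are all hypothetical stand-ins, not the actual mimir-caveats API; the point is only the dispatch: range annotations where the domain is ordered, boolean attribute-level annotations otherwise.

```python
# Hypothetical sketch (NOT the real mimir-caveats API): dispatch between
# range annotations for column types with ordered domains and boolean
# attribute-level annotations for arbitrary UDTs without a usable ordering.
from dataclasses import dataclass
from typing import Any, Union

@dataclass
class RangeAnnotation:
    lb: Any  # lower bound on the possible values of the attribute
    bg: Any  # best-guess value
    ub: Any  # upper bound on the possible values

@dataclass
class BoolAnnotation:
    certain: bool  # True iff the attribute value is certain

# Stand-ins for Spark's built-in ordered column types (Int, Double, String, ...)
ORDERED_TYPES = (int, float, str)

def annotate(value: Any, certain: bool = True) -> Union[RangeAnnotation, BoolAnnotation]:
    """Use a range annotation when the value's type has an ordered domain;
    otherwise fall back to a boolean attribute-level annotation."""
    if isinstance(value, ORDERED_TYPES):
        # A certain value collapses to the degenerate range [v, v] with best guess v.
        return RangeAnnotation(lb=value, bg=value, ub=value)
    return BoolAnnotation(certain=certain)
```

Under this shape, consumers such as `CaveatRangeExpression` would need to pattern-match on both annotation forms instead of assuming every annotation carries `lb`/`bg`/`ub` fields.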