Releases: datastax/spark-cassandra-connector
Preview release 1.6.0-M2
1.6.0 M2
- Performance improvement: keyBy creates an RDD with CassandraPartitioner,
  so shuffling can be avoided in many cases, e.g. when keyBy is followed
  by groupByKey or join (SPARKC-330)
- Improved exception message when a data frame is to be saved in a
  non-empty Cassandra table (SPARKC-338)
- Support for Joda time for Cassandra date type (SPARKC-342)
- Don't double-resolve the paths for port locks in embedded C* (contribution by crakjie)
- Drop indices that cannot be used in predicate pushdown (SPARKC-347)
- Added support for IF NOT EXISTS (SPARKC-362)
- Nested optional case classes can be saved as UDTs (SPARKC-346)
- Merged Java API into main module (SPARKC-335)
- Upgraded to Spark 1.6.1 (SPARKC-344)
- Fix NoSuchElementException when fetching database schema from Cassandra (SPARKC-341)
- Removed the ability to specify a cluster alias directly and added helper methods
  which make it easier to configure Cassandra-related data frames (SPARKC-289)
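The keyBy change (SPARKC-330) means a keyed Cassandra RDD now carries a CassandraPartitioner, so key-based operations can skip the shuffle stage. A minimal sketch of the pattern, assuming a running Spark and Cassandra cluster; the keyspace, table, and column names (`ks`, `users`, `key`, `value`) are illustrative:

```scala
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf()
  .setAppName("keyby-sketch")
  .set("spark.cassandra.connection.host", "127.0.0.1"))

// keyBy yields an RDD whose partitioner is a CassandraPartitioner,
// so the subsequent groupByKey does not need a shuffle stage.
val byKey = sc.cassandraTable[(String, Int)]("ks", "users")
  .select("key", "value")
  .keyBy[Tuple1[String]]("key")

val grouped = byKey.groupByKey() // partitioner preserved, no shuffle
```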
Preview release 1.6.0-M1
1.6.0 M1
- Adds the ability to add additional Predicate Pushdown Rules at Runtime (SPARKC-308).
- Added CassandraOption for Skipping Columns when Writing to C* (SPARKC-283)
- Upgrade Spark to 1.6.0 and add Apache Snapshot repository to resolvers (SPARKC-272, SPARKC-298, SPARKC-305)
- Includes all patches up to 1.5.0.
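The CassandraOption added in SPARKC-283 distinguishes three cases on write: store a value, store a tombstone (null), or skip the column entirely so the existing cell is untouched. A sketch under the assumption of an existing SparkContext `sc` and a hypothetical table `ks.tab (key int PRIMARY KEY, a int, b int)`:

```scala
import com.datastax.spark.connector._
import com.datastax.spark.connector.types.CassandraOption

sc.parallelize(Seq(
  (1, CassandraOption(10),    CassandraOption.Unset), // write a, leave b untouched
  (2, CassandraOption.Unset,  CassandraOption.Null)   // leave a untouched, delete b
)).saveToCassandra("ks", "tab")
```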
Release 1.5.0
1.5.0
- Fixed assembly build (SPARKC-311)
- Upgrade Cassandra version to 3.0.2 by default and allow specifying an arbitrary
  Cassandra version for integration tests through the command line (SPARKC-307)
- Upgrade Cassandra driver to 3.0.0 GA
- Includes all patches up to 1.4.2.
1.4.2
- SqlRowWriter not using Cached Converters (SPARKC-329)
- Fix Violation of Partition Contract (SPARKC-323)
- Use ScalaReflectionLock from Spark instead of TypeTag to work around
  Scala 2.10 reflection concurrency issues (SPARKC-333)
Release 1.4.2
1.4.2
- SqlRowWriter not using Cached Converters (SPARKC-329)
- Fix Violation of Partition Contract (SPARKC-323)
- Use ScalaReflectionLock from Spark instead of TypeTag to work around
  Scala 2.10 reflection concurrency issues (SPARKC-333)
Release 1.5.0 Release Candidate 1
1.5.0 RC1
- Fix special case types in SqlRowWriter (SPARKC-306)
- Fix sbt assembly
- Create Cassandra Schema from DataFrame (SPARKC-231)
- JWCT inherits Spark Conf from Spark Context (SPARKC-294)
- Support of new Cassandra Date and Time types (SPARKC-277)
- Upgrade Cassandra driver to 3.0.0-rc1
Preview release 1.5.0-M3
1.5.0 M3
- Added a ColumnRef child class to represent function calls (SPARKC-280)
- Warn if keep_alive_ms is less than the Spark streaming batch size (SPARKC-228)
- Fixed real tests (SPARKC-247)
- Added support for tinyint and smallint types (SPARKC-269)
- Updated Java driver version to 3.0.0-alpha4; Codec API changes (SPARKC-285)
- Updated Java driver version to 3.0.0-alpha3 (SPARKC-270)
- Changed the way CassandraConnectorSource is obtained due to SPARK-7171 (SPARKC-268)
- Change write ConsistencyLevel to LOCAL_QUORUM (SPARKC-262)
- Parallelize integration tests (SPARKC-293)
- Includes all patches up to 1.4.1.
Release 1.4.1
1.4.1
- Let UDTs be converted from GenericRows (SPARKC-271)
- Map InetAddress and UUID to string and store it as StringType in Spark SQL (SPARKC-259)
- VarInt Column is converted to decimal stored in Spark SQL (SPARKC-266)
- Retrieve TableSize from Cassandra system table for datasource relation (SPARKC-164)
- Fix merge strategy for netty.io.properties (SPARKC-249)
- Upgrade integration tests to use Cassandra 2.1.9, and upgrade the Java driver
  to 2.1.7.1 and Spark to 1.4.1 (SPARKC-248)
- Make OptionConverter handle Nones as well as nulls (SPARKC-275)
Preview release 1.5.0-M2
1.5.0 M2
- Bump Java Driver to 2.2.0-rc3, Guava to 16.0.1 and test against Cassandra 2.2.1 (SPARKC-229)
- Includes all patches up to 1.4.0.
Release 1.3.1
1.3.1
- Remove wrapRDD from CassandraTableScanJavaRDD. Fixes an exception occurring
  when performing RDD operations on any CassandraTableScanJavaRDD (SPARKC-236)
- Backport synchronization fixes from 1.4.0 (SPARKC-247)
Release 1.4.0
The first stable release of 1.4 branch.
1.4.0
- Fixed broken integration tests (SPARKC-247):
- Fixed Scala reflection race condition in TupleColumnMapper.
- Fixed dev/run-real-tests script.
- Fixed CheckpointStreamSpec test.
1.4.0 RC1
- Added TTL and WRITETIME documentation (SPARKC-244)
- Reduced the amount of unnecessary error logging in integration tests (SPARKC-223)
- Fixed Repartition, JWC, and Streaming Checkpointing broken by serialization
  errors related to passing RowWriterFactory / DefaultRowWriter (SPARKC-202)
- Fixed exceptions occurring when performing RDD operations on any
  CassandraTableScanJavaRDD (SPARKC-236)
1.4.0 M3
- Fixed UDT column bug in SparkSQL (SPARKC-219)
- Includes all patches up to release 1.2.5 and 1.3.0
- Fixed connection caching, changed SSL EnabledAlgorithms to Set (SPARKC-227)
1.4.0 M2
- Includes some unreleased patches from 1.2.5
- Changed default query timeout from 12 seconds to 2 minutes (SPARKC-220)
- Add a configurable delay between subsequent query retries (SPARKC-221)
- spark.cassandra.output.throughput_mb_per_sec can now be set to a decimal (SPARKC-226)
- Includes unreleased patches from 1.3.0
- Remove whitespace in the C* connection host string (fix by Noorul Islam K M)
- Includes all changes up to 1.3.0-RC1.
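Since SPARKC-226, the write-throughput cap accepts fractional values, which can be set in spark-defaults.conf or via --conf on spark-submit; the 2.5 figure below is purely illustrative:

```
# spark-defaults.conf -- cap connector writes at 2.5 MB/s per core
spark.cassandra.output.throughput_mb_per_sec  2.5
```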
1.4.0 M1
- Upgrade Spark to 1.4.0 (SPARKC-192)