Prepare release 1.1.0-alpha1
pkolaczk committed Sep 22, 2014
1 parent 26d9b62 commit 75beefd
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -9,8 +9,8 @@ execute arbitrary CQL queries in your Spark applications.
 ## Features
 
 - Compatible with Apache Cassandra version 2.0 or higher and DataStax Enterprise 4.5
-- Compatible with Apache Spark 0.9 and 1.0
-- Exposes Cassandra tables as Spark RDDs
+- Compatible with Apache Spark 1.0 and 1.1
+- Exposes Cassandra tables as Spark RDDs
 - Maps table rows to CassandraRow objects or tuples
 - Offers customizable object mapper for mapping rows to objects of user-defined classes
 - Saves RDDs back to Cassandra by implicit `saveToCassandra` call
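The features above (exposing tables as RDDs and the implicit `saveToCassandra` call) can be sketched in Scala roughly as follows. This is a minimal illustration, not runnable without a Spark cluster and a Cassandra node; the keyspace/table names (`test`, `words`), the column names, and the connection host are assumptions.

```scala
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

// Assumes a Cassandra node reachable at 127.0.0.1 (hypothetical setup).
val conf = new SparkConf()
  .setAppName("connector-demo")
  .set("spark.cassandra.connection.host", "127.0.0.1")
val sc = new SparkContext(conf)

// Expose a Cassandra table as a Spark RDD of CassandraRow objects.
val rows = sc.cassandraTable("test", "words")

// Save an RDD back to Cassandra via the implicit saveToCassandra call.
sc.parallelize(Seq(("cat", 30), ("dog", 40)))
  .saveToCassandra("test", "words", SomeColumns("word", "count"))
```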
@@ -25,11 +25,11 @@ This project has been published to the Maven Central Repository.
 For SBT to download the connector binaries, sources and javadoc, put this in your project
 SBT config:
 
-    libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.0.0" withSources() withJavadoc()
+    libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.0-alpha1" withSources() withJavadoc()
 
 If you want to access the functionality of Connector from Java, you may want to add also a Java API module:
 
-    libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector-java" % "1.0.0" withSources() withJavadoc()
+    libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector-java" % "1.1.0-alpha1" withSources() withJavadoc()
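Taken together, the two dependency lines updated in this commit might sit in a `build.sbt` along these lines. This is a sketch, not a definitive project file: the project name, Scala version, and the `spark-core` dependency (and its `provided` scope) are assumptions not stated in the diff.

```scala
// build.sbt -- hypothetical minimal project using the 1.1.0-alpha1 connector.
name := "my-spark-cassandra-app"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  // Spark itself is typically "provided" by the cluster at runtime (assumption).
  "org.apache.spark" %% "spark-core" % "1.1.0" % "provided",
  // The two artifacts released by this commit:
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.0-alpha1" withSources() withJavadoc(),
  "com.datastax.spark" %% "spark-cassandra-connector-java" % "1.1.0-alpha1" withSources() withJavadoc()
)
```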

 ## Building
 In the project root directory run:
