Release notes
The 1.8.0 release constitutes a major re-working of a large number of internal phantom primitives, including but not limited to a brand new Scala-flavoured QueryBuilder with full support for all CQL 3 features and even some of the more "esoteric" options available in CQL. We went above and beyond to offer a tool that is comprehensive and doesn't miss out on any feature of the protocol, no matter how small.
If you are wondering what happened to 1.7.0: it was never publicly released, as testing the new QueryBuilder entailed serious internal effort, and for such a drastic change we wanted to do as much as possible to eliminate bugs. Some will surely still be found, but hopefully very few, and with your help they will be short-lived.
Ditching the Java driver was not a question of code quality in the driver, but rather an opportunity to exploit the more advanced features of the Scala type system: introducing behaviour such as preventing duplicate limits on queries using phantom types, preventing even more invalid queries from compiling, and switching to a fully immutable QueryBuilder that is more in tune with idiomatic Scala, as opposed to the Java-esque mutable alternative in the Java driver.
`import com.websudos.phantom.Implicits._` has now been renamed to `import com.websudos.phantom.dsl._`. The old import is still there but deprecated.

A natural question you may ask is why we resorted to seemingly unimportant changes, but the goal here was to enforce the new implicit mechanism and offer a uniform importing experience across all modules. So you can have the series `import com.websudos.phantom.dsl._`, `import com.websudos.phantom.thrift._`, `import com.websudos.phantom.testkit._` and so on, all identical in shape, all using Scala package object definitions as intended.
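For illustration, here is the new-style import in context. This is a minimal sketch: the `Recipe` record, the `Recipes` table and its columns are hypothetical names, not part of the release notes; only the `com.websudos.phantom.dsl._` import comes from the text above.

```scala
// Everything - column types, keys, operators - now comes from one package object.
import com.websudos.phantom.dsl._

case class Recipe(url: String, name: String)

// A hypothetical table definition compiled against the new unified import.
class Recipes extends CassandraTable[Recipes, Recipe] {
  object url extends StringColumn(this) with PartitionKey[String]
  object name extends StringColumn(this)

  override def fromRow(row: Row): Recipe = Recipe(url(row), name(row))
}
```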
Until now, our implementation of Cassandra primitives has been based on the Datastax Java driver and on an `Option` based DSL. This made it hard to deal with parse errors at runtime, specifically in those situations where the DSL was unable to parse the required type from the Cassandra result, or in the simple case where `null` was returned for a non-optional column.
The core `Column[Table, Record, ValueType].apply(row: Row): ValueType` method, which was used to parse rows in a type-safe manner, was written like this:

```scala
import com.datastax.driver.core.Row

def apply(row: Row): T = optional(row).getOrElse(throw new Exception("Couldn't parse things"))
```
This approach discarded the original exception that caused the parser to produce a `null`; the failure surfaced only as a `None`, and its underlying cause was ignored.
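The difference can be illustrated with plain Scala, independent of phantom. The parser names here are illustrative; the point is that `Option` erases the failure while `Try` carries it along:

```scala
import scala.util.{Failure, Success, Try}

object ParseDemo extends App {
  // Option-based parsing: the original cause of failure is lost.
  def parseOption(raw: String): Option[Int] = Try(raw.toInt).toOption

  // Try-based parsing: the original NumberFormatException survives.
  def parseTry(raw: String): Try[Int] = Try(raw.toInt)

  println(parseOption("oops")) // None - no hint of why parsing failed

  parseTry("oops") match {
    case Success(value) => println(value)
    case Failure(ex)    => println(ex) // the NumberFormatException, intact
  }
}
```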
With the new type-safe primitive interface, which no longer relies on the Datastax Java driver, we were also able to move the `Option` based parsing mechanism to a `Try` based one, which now logs all parse errors unaltered, exactly as they are thrown, using the logger for the given table.
Internally, we are now using something like this:
```scala
import scala.util.{Failure, Success, Try}
import com.datastax.driver.core.Row

// Implemented per column; T is the column's value type.
def optional(r: Row): Try[T]

def apply(r: Row): T = optional(r) match {
  case Success(value) => value
  case Failure(ex) => {
    table.logger.error(ex.getMessage)
    throw ex
  }
}
```
The exception is now logged and propagated as-is. We intercept it only to provide consistent logging through the table's logger, where you would naturally monitor for errors.
Play enumerators and Twitter ResultSpools have been removed from the default `one`, `get`, `fetch` and `collect` methods. You will have to explicitly call `fetchEnumerator` and `fetchSpool` if you want result throttling through async lazy iterators. This offers everyone a significant improvement in query performance. Async iterators needed a lot of expensive "magic" to work properly, but you don't always need to fold over 100k records. That behaviour was implemented both as a means of showing off and for all-in-one loads like those the Spark - Cassandra connector performs, e.g. dumping C* data into HDFS or some other backup system. A big 60 - 70% gain should be expected.
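As a sketch of the new opt-in behaviour: the `Recipes` table, the `Recipe` record type and the implicit `session`, `keySpace` and execution context (normally mixed in via a connector) are assumed here; only `fetch`, `fetchEnumerator` and `fetchSpool` come from the release notes.

```scala
import scala.concurrent.Future
import com.websudos.phantom.dsl._

// Default behaviour from 1.8.0: the result set is fetched eagerly.
val eager: Future[List[Recipe]] = Recipes.select.fetch()

// Lazy, throttled iteration must now be requested explicitly.
val enumerated = Recipes.select.fetchEnumerator() // Play enumerator
val spooled    = Recipes.select.fetchSpool()      // Twitter spool
```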
Phantom connectors now require an implicit `com.websudos.phantom.connectors.KeySpace` to be defined. Instead of using a plain string, you just have to use `KeySpace.apply`, or simply: `trait MyConnector extends Connector { implicit val keySpace = KeySpace("your_def") }`. This change allows us to replace the existing connector model and vastly reduce the number of concurrent cluster connections required to perform operations on various keyspaces. Instead of the one-per-keyspace model, we can now successfully re-use the same session without even needing to switch, as phantom will use the fully qualified CQL reference syntax, e.g. `SELECT FROM keyspace.table` instead of `SELECT FROM table`.
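A minimal connector sketch following the snippet above. The trait name and keyspace string are illustrative, and the import path for `Connector` is assumed to match the `KeySpace` path given in the text:

```scala
import com.websudos.phantom.connectors.{Connector, KeySpace}

trait MyAppConnector extends Connector {
  // Queries issued through this connector are fully qualified,
  // e.g. SELECT FROM my_app.table, so the session can be shared
  // across keyspaces without switching.
  implicit val keySpace: KeySpace = KeySpace("my_app")
}
```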
An entirely new set of options has been enabled in the type-safe DSL. You can now alter tables, specify advanced compression behaviour and so forth, all from within phantom and with the guarantee of auto-completion and type safety.
This was never possible before in phantom, and from 1.7.0 onwards we feature full support for ALTER queries.
To stay up-to-date with our latest releases and news, follow us on Twitter: @outworkers.