
# Cassandra

## Sync overview

### Output schema

Cassandra organizes data into keyspaces and tables, which are partitioned and replicated across the nodes of the cluster. This connector maps an incoming Airbyte stream to a Cassandra table and a namespace to a Cassandra keyspace. Fields in the Airbyte message become columns in the Cassandra table. Each table contains the following columns (a sketch of the resulting table definition follows this list):

* `_airbyte_ab_id`: a randomly generated UUID used as the partition key.
* `_airbyte_emitted_at`: a timestamp representing when the record was received from the data source.
* `_airbyte_data`: a JSON text blob containing the extracted record data.
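
To make the layout concrete, here is a minimal sketch of what such a destination table looks like, expressed with the DataStax Python driver; the contact point, the keyspace name (`airbyte`), and the table name (`users`) are illustrative assumptions, not values mandated by the connector.

```python
# Illustrative only: the shape of a destination table for a hypothetical
# stream named "users" synced into a hypothetical keyspace named "airbyte".
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"], port=9042).connect()

session.execute(
    "CREATE TABLE IF NOT EXISTS airbyte.users ("
    "  _airbyte_ab_id uuid PRIMARY KEY,"    # random UUID partition key
    "  _airbyte_emitted_at timestamp,"      # when the record was received
    "  _airbyte_data text"                  # record payload as JSON text
    ")"
)
```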

### Features

| Feature | Support | Notes |
| :--- | :---: | :--- |
| Full Refresh Sync | ✅ | Warning: this mode deletes all previously synced data in the configured Cassandra table. |
| Incremental - Append Sync | ✅ | |
| Incremental - Deduped History | ❌ | As this connector does not support dbt, we don't support this sync mode on this destination. |
| Namespaces | ✅ | Namespace will be used as part of the table name. |

### Performance considerations

Cassandra is designed to handle large volumes of data by distributing write operations across the nodes of the cluster. As long as the cluster has enough nodes, it scales horizontally and can absorb the write load produced by the connector.

## Getting started

### Requirements

* The driver is compatible with Cassandra >= 2.1.
* Configuration (a connection sketch using these values follows this list):
  * Keyspace [default keyspace to use when writing data]
  * Username [authentication username]
  * Password [authentication password]
  * Address [cluster address]
  * Port [default: 9042]
  * Datacenter [optional] [default: datacenter1]
  * Replication [optional] [default: 1]
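
As a rough illustration of how these settings fit together, the sketch below opens a connection with the DataStax Python driver and creates a keyspace using the default replication factor. The host name, credentials, and keyspace name are placeholders, not values prescribed by the connector.

```python
# Hypothetical values standing in for the connector's configuration fields.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

auth = PlainTextAuthProvider(username="airbyte", password="secret")  # Username / Password
cluster = Cluster(
    contact_points=["cassandra.example.com"],  # Address
    port=9042,                                 # Port (default)
    auth_provider=auth,
)
session = cluster.connect()

# Replication factor 1 mirrors the connector's default "Replication" setting.
session.execute(
    "CREATE KEYSPACE IF NOT EXISTS airbyte "
    "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}"
)
```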

### Setup guide

TODO: more info, screenshots?, etc...