Apache Cassandra

Original author(s): Avinash Lakshman, Prashant Malik / Facebook
Developer(s): Apache Software Foundation
Initial release: July 2008 (2008-07)
Stable release: 5.0.2 [1] / October 19, 2024
Repository
Written in: Java
Operating system: Cross-platform
Available in: English
Type: NoSQL Database, data store
License: Apache License 2.0
Website: cassandra.apache.org

Apache Cassandra is a free and open-source database management system designed to handle large volumes of data across multiple commodity servers. The system prioritizes availability and scalability over consistency, making it particularly suited for systems with high write throughput requirements due to its LSM-tree-based storage layer. [2] As a wide-column database, Cassandra supports flexible schemas and efficiently handles data models with numerous sparse columns. The system is optimized for applications with well-defined data access patterns that can be incorporated into the schema design. [2] Cassandra supports computer clusters that may span multiple data centers, [3] with asynchronous and masterless replication enabling low-latency operations for all clients. Its design combines Amazon's Dynamo distributed storage and replication techniques with Google's Bigtable data storage engine model. [4]

History

Avinash Lakshman, a co-author of Amazon's Dynamo, and Prashant Malik developed Cassandra at Facebook to support the inbox search functionality. Facebook released Cassandra as open-source software on Google Code in July 2008. [5] In March 2009, it became an Apache Incubator project [6] and on February 17, 2010, it graduated to a top-level project. [7]

The developers at Facebook named their database after Cassandra, the mythological Trojan prophetess, referencing her curse of making prophecies that were never believed. [8]

Features and Limitations

Cassandra uses a distributed architecture where all nodes perform identical functions, eliminating single points of failure. The system employs configurable replication strategies to distribute data across clusters, providing redundancy and disaster recovery capabilities. It achieves linear scaling by increasing read and write throughput with each additional node while maintaining continuous service.

Cassandra is categorized as an AP (Availability and Partition Tolerance) system, emphasizing availability and partition tolerance over consistency. While it offers tunable consistency levels for both read and write operations, its architecture makes it less suitable for use cases requiring strict consistency guarantees. [2] Additionally, Cassandra's compatibility with Hadoop and related tools allows for integration with existing big data processing workflows. Eventual consistency across replicas is maintained with mechanisms such as tombstones, which record deletions so that reads and upserts resolve correctly.
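
Consistency is tuned per request by the client rather than fixed globally. A minimal sketch using the cqlsh CONSISTENCY command (driver APIs expose the same setting), assuming a hypothetical users table already exists in the current keyspace:

CONSISTENCY QUORUM;                  -- a majority of replicas must acknowledge
SELECT * FROM users WHERE id = '1';  -- stronger read, higher latency

CONSISTENCY ONE;                     -- a single replica suffices
SELECT * FROM users WHERE id = '1';  -- faster read, weaker guarantee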

The system's query capabilities have notable limitations. Cassandra does not support advanced query patterns such as multi-table JOINs, ad hoc aggregations, or complex queries. [2] These limitations stem from its distributed architecture, which optimizes for scalability and availability rather than complex query operations.
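
In practice, queries that would require a JOIN in a relational database are instead served by denormalized tables designed around a single access pattern. A hedged sketch with hypothetical table and column names, assuming a keyspace is already selected:

CREATE TABLE IF NOT EXISTS orders_by_user (
    user_id  text,
    order_id timeuuid,
    total    decimal,
    PRIMARY KEY (user_id, order_id)   -- partitioned by user, ordered by order id
);

-- All orders for a user are read from a single partition; no JOIN is needed
SELECT * FROM orders_by_user WHERE user_id = 'u42';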

Data model

As a wide-column store, Cassandra combines features of both key-value and tabular database systems. It implements a partitioned row store model with adjustable consistency levels. [9]


Data Model Comparison: Cassandra vs RDBMS

Feature        | Cassandra                     | RDBMS
Organization   | Keyspace → Table → Row        | Database → Table → Row
Row Structure  | Dynamic columns               | Fixed schema
Column Data    | Name, type, value, timestamp  | Name, type, value
Schema Changes | Runtime modifications         | Usually requires downtime
Data Model     | Denormalized                  | Normalized with JOINs


The data model consists of several hierarchical components: keyspaces, tables, rows, and columns.


Keyspace

A keyspace in Cassandra is analogous to a database in relational systems. It contains multiple tables and manages configuration information, including replication strategy and user-defined types (UDTs). [2]
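
A hedged sketch of keyspace creation with per-datacenter replication and a user-defined type; the keyspace, datacenter, and type names are hypothetical:

CREATE KEYSPACE IF NOT EXISTS shop
    WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 2};

-- A user-defined type stored within the keyspace
CREATE TYPE IF NOT EXISTS shop.address (
    street text,
    city   text,
    zip    text
);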

Tables

Tables (called column families prior to CQL 3) are containers for rows of data. Each table has a name and configuration information for its stored data. Tables may be created, dropped, or altered at run-time without blocking updates and queries. [10]
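
For example, schema changes are issued as ordinary CQL statements against a live cluster; the names below are illustrative:

CREATE TABLE IF NOT EXISTS shop.users (
    id   text PRIMARY KEY,
    name text
);

-- Columns can be added while the cluster keeps serving reads and writes;
-- DROP TABLE and other schema changes are likewise applied online.
ALTER TABLE shop.users ADD email text;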

Rows and Columns

Each row is identified by a primary key and contains columns. The first component of a table's primary key is the partition key; within a partition, rows are clustered by the remaining columns of the key. [11]
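
A minimal sketch with hypothetical names: the first primary key component is the partition key, and the remaining key columns cluster rows within each partition.

CREATE TABLE IF NOT EXISTS shop.sensor_readings (
    sensor_id  text,        -- partition key: determines which nodes store the row
    reading_ts timestamp,   -- clustering column: orders rows within the partition
    value      double,
    PRIMARY KEY ((sensor_id), reading_ts)
) WITH CLUSTERING ORDER BY (reading_ts DESC);

-- Rows in one partition can be range-scanned by the clustering column
SELECT value FROM shop.sensor_readings
    WHERE sensor_id = 's-17' AND reading_ts > '2024-01-01';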

Columns contain the data belonging to a row; each column consists of a name, a type, a value, and a timestamp.

Unlike traditional RDBMS tables, rows within the same table can have varying columns, providing a flexible structure. This flexibility distinguishes Cassandra from relational databases, as not all columns need to be specified for each row. [2] Other columns may be indexed separately from the primary key. [12]
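
Columns outside the primary key can be indexed with a secondary index; a hedged sketch reusing the hypothetical users table from above:

CREATE INDEX IF NOT EXISTS users_by_email ON shop.users (email);

-- The indexed column can then be used in a WHERE clause without the partition key
SELECT id, name FROM shop.users WHERE email = 'john@example.com';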

Storage Model

Cassandra uses a Log Structured Merge Tree (LSM tree) index to optimize write throughput, in contrast to the B-tree indexes used by most databases. [2]

Storage Model Comparison: Cassandra vs RDBMS

Feature            | Cassandra                     | RDBMS
Index Structure    | LSM Tree                      | B-Tree
Write Process      | Append-only with Memtable     | In-place updates
Storage Components | Commit Log, Memtable, SSTable | Data files, Transaction Log
Update Strategy    | New entry for each change     | Modify existing data
Delete Handling    | Tombstone markers             | Direct removal
Read Optimization  | Secondary                     | Primary
Write Optimization | Primary                       | Secondary

The storage architecture consists of three main components: the commit log, the Memtable, and SSTables. [2]


Core Components

  • Commit log: an append-only file on disk to which every write is recorded first, providing durability and crash recovery.
  • Memtable: an in-memory structure that buffers recent writes, sorted by key.
  • SSTable (sorted string table): an immutable on-disk file produced when a Memtable is flushed.

Write and Read Processes

Write operations follow a two-stage process:

  1. The write is recorded in the commit log and added to the Memtable
  2. When the Memtable reaches size or time thresholds, it flushes to an SSTable

Read operations:

  1. Check Memtable for latest data
  2. Search SSTables from newest to oldest using bloom filters for efficiency

Data Management

Tombstones

Every operation (create/update/delete) generates a new entry, with deletes handled via "tombstones". While common in many databases, tombstones can cause performance degradation in delete-heavy workloads. [13]
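
A hedged CQL sketch of how deletes surface as tombstones, reusing the hypothetical users table from above:

-- A delete writes a tombstone rather than removing the data in place
DELETE FROM shop.users WHERE id = '1';

-- gc_grace_seconds controls how long tombstones are retained so they can
-- propagate to all replicas before compaction is allowed to purge them
ALTER TABLE shop.users WITH gc_grace_seconds = 864000;   -- 10 days, the default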

Compaction

Compaction consolidates multiple SSTables to:

  • Reduce storage usage
  • Remove deleted row tombstones
  • Improve read performance
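
The compaction strategy is configurable per table. A hedged sketch reusing the hypothetical users table from above; the strategy classes shown ship with Cassandra:

-- Size-tiered compaction (the default): merges SSTables of similar size
ALTER TABLE shop.users
    WITH compaction = {'class': 'SizeTieredCompactionStrategy'};

-- Leveled compaction: trades write amplification for more predictable reads
ALTER TABLE shop.users
    WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};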

Cassandra Query Language

Cassandra Query Language (CQL) is the interface for accessing Cassandra, as an alternative to the traditional Structured Query Language (SQL). CQL adds an abstraction layer that hides the implementation details of the underlying storage structure and provides native syntaxes for collections and other common encodings. Language drivers are available for Java (JDBC), Python (DBAPI2), Node.js (DataStax), Go (gocql), and C++. [14]

The keyspace in Cassandra is a namespace that defines data replication across nodes; replication is therefore defined at the keyspace level. Below is an example of keyspace creation, including a column family, in CQL 3.0: [15]

CREATE KEYSPACE MyKeySpace
    WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 3};

USE MyKeySpace;

CREATE COLUMNFAMILY MyColumns (id text, lastName text, firstName text, PRIMARY KEY (id));

INSERT INTO MyColumns (id, lastName, firstName) VALUES ('1', 'Doe', 'John');

SELECT * FROM MyColumns;

Which gives:

 id | lastName | firstName
----+----------+-----------
  1 | Doe      | John

(1 rows)

Distributed Architecture

Gossip Protocol

Cassandra uses a peer-to-peer gossip protocol for cluster communication. Nodes routinely exchange information about cluster state, including node availability, load, and schema versions.

The system uses vector clocks to track information currency and ignore outdated state data. [2]

Seed Nodes

The architecture designates certain nodes as "seed" nodes, which serve as the initial contact points that new or restarting nodes use to join the gossip process; beyond this bootstrapping role, seed nodes behave like any other node.

This design eliminates single points of failure while maintaining cluster-wide consistency of operational knowledge. [2]


Fault Tolerance

Cassandra employs the Phi Accrual Failure Detector to manage node failures during cluster operation. [16] Through this system, each node independently assesses the availability of other nodes during gossip communication. When a node fails to respond, it is "convicted" and removed from write operations, though it can rejoin the cluster upon resuming heartbeat signals. [2]

To maintain data integrity during node outages, Cassandra uses a "hinted handoff" mechanism. When writing to an offline node, the coordinator node temporarily stores the write data as a "hint." Once the offline node returns to service, these hints are forwarded to restore data consistency. Notably, Cassandra only permanently removes nodes through explicit administrative decommissioning or rebuilding, preventing temporary communication failures or restarts from triggering unnecessary data rebalancing. [2]

Management and monitoring

Cassandra is a Java-based system that can be managed and monitored via Java Management Extensions (JMX). The JMX-compliant Nodetool utility, for instance, can be used to manage a Cassandra cluster. [17] Nodetool also offers a number of commands to return Cassandra metrics pertaining to disk usage, latency, compaction, garbage collection, and more. [18]

Since the release of Cassandra 2.0.2 in 2013, metrics have been produced via the Dropwizard metrics framework [19] and may be queried via JMX using tools such as JConsole, or forwarded to external monitoring systems via Dropwizard-compatible reporter plugins. [20]

Releases

Releases after graduation include:

Version | Original release date | Latest version | Release date | Status [21]
0.6     | 2010-04-12            | 0.6.13         | 2011-04-18   | No longer maintained
0.7     | 2011-01-10            | 0.7.10         | 2011-10-31   | No longer maintained
0.8     | 2011-06-03            | 0.8.10         | 2012-02-13   | No longer maintained
1.0     | 2011-10-18            | 1.0.12         | 2012-10-04   | No longer maintained
1.1     | 2012-04-24            | 1.1.12         | 2013-05-27   | No longer maintained
1.2     | 2013-01-02            | 1.2.19         | 2014-09-18   | No longer maintained
2.0     | 2013-09-03            | 2.0.17         | 2015-09-21   | No longer maintained
2.1     | 2014-09-16            | 2.1.22         | 2020-08-31   | No longer maintained
2.2     | 2015-07-20            | 2.2.19         | 2020-11-04   | No longer maintained
3.0     | 2015-11-09            | 3.0.29         | 2023-05-15   | No longer maintained
3.11    | 2017-06-23            | 3.11.15        | 2023-05-05   | No longer maintained
4.0     | 2021-07-26            | 4.0.13         | 2023-05-20   | Maintained until 5.1.0 release
4.1     | 2022-06-17            | 4.1.6          | 2024-08-19   | Maintained until 5.2.0 release
5.0     | 2024-09-05            | 5.0.2          | 2024-10-19   | Latest release; maintained until 5.3.0 release

See also

A distributed data store is a computer network where information is stored on more than one node, often in a replicated fashion. It is usually specifically used to refer to either a distributed database where users store information on a number of nodes, or a computer network in which users store information on a number of peer network nodes.

MySQL Cluster, also known as MySQL NDB Cluster, is a technology providing shared-nothing clustering and auto-sharding for the MySQL database management system. It is designed to provide high availability and high throughput with low latency, while allowing for near linear scalability. MySQL Cluster is implemented through the NDB or NDBCLUSTER storage engine for MySQL.

Multi-master replication is a method of database replication which allows data to be stored by a group of computers, and updated by any member of the group. All members are responsive to client data queries. The multi-master replication system is responsible for propagating the data modifications made by each member to the rest of the group and resolving any conflicts that might arise between concurrent changes made by different members.

Bigtable is a fully managed wide-column and key-value NoSQL database service for large analytical and operational workloads as part of the Google Cloud portfolio.

<span class="mw-page-title-main">Apache CouchDB</span> Document-oriented NoSQL database

Apache CouchDB is an open-source document-oriented NoSQL database, implemented in Erlang.

HBase is an open-source non-relational distributed database modeled after Google's Bigtable and written in Java. It is developed as part of Apache Software Foundation's Apache Hadoop project and runs on top of HDFS or Alluxio, providing Bigtable-like capabilities for Hadoop. That is, it provides a fault-tolerant way of storing large quantities of sparse data.

Dynamo is a set of techniques that together can form a highly available key-value structured storage system or a distributed data store. It has properties of both databases and distributed hash tables (DHTs). It was created to help address some scalability issues that Amazon experienced during the holiday season of 2004. By 2007, it was used in Amazon Web Services, such as its Simple Storage Service (S3).

A database shard, or simply a shard, is a horizontal partition of data in a database or search engine. Each shard may be held on a separate database server instance, to spread load.

<span class="mw-page-title-main">Couchbase Server</span> Open-source NoSQL database

Couchbase Server, originally known as Membase, is a source-available, distributed multi-model NoSQL document-oriented database software package optimized for interactive applications. These applications may serve many concurrent users by creating, storing, retrieving, aggregating, manipulating and presenting data. In support of these kinds of application needs, Couchbase Server is designed to provide easy-to-scale key-value or JSON document access with low latency and high sustained throughput. It is designed to be clustered from a single machine to very large-scale deployments spanning many machines.

<span class="mw-page-title-main">Apache Hive</span> Database engine

Apache Hive is a data warehouse software project. It is built on top of Apache Hadoop for providing data query and analysis. Hive gives an SQL-like interface to query data stored in various databases and file systems that integrate with Hadoop. Traditional SQL queries must be implemented in the MapReduce Java API to execute SQL applications and queries over distributed data.

Hector is a high-level client API for Apache Cassandra. Named after Hector, a warrior of Troy in Greek mythology, it is a substitute for the Cassandra Java Client, or Thrift, that is encapsulated by Hector. It also has Maven repository access.

<span class="mw-page-title-main">Standard column family</span>

The standard column family is a NoSQL object that contains columns of related data. It is a tuple (pair) that consists of a key–value pair, where the key is mapped to a value that is a set of columns. By analogy with relational databases, a standard column family acts as a "table", with each key–value pair being a "row". Each column is a tuple consisting of a column name, a value, and a timestamp. In a relational database table, this data would be grouped together within a table with other non-related data.

A tombstone is a deleted record in a replica of a distributed data store. The tombstone is necessary, as distributed data stores use eventual consistency, where only a subset of nodes where the data is stored must respond before an operation is considered to be successful.

Sherpa is a cloud storage platform developed by Yahoo!. It is a hosted, distributed, and geographically replicated key-value data store. The service is a NoSQL system that addresses the scalability, availability, and latency needs of the conglomerate's websites. Sherpa has abilities such as elastic growth, multi-tenancy, global footprint for local low-latency access, asynchronous replication, representational state transfer (REST) based web service APIs, novel per-record consistency knobs, high availability, compression, secondary indexes, and record-level replication.

<span class="mw-page-title-main">Amazon DynamoDB</span> NoSQL database service

Amazon DynamoDB is a managed NoSQL database service provided by Amazon Web Services (AWS). It supports key-value and document data structures and is designed to handle a wide range of applications requiring scalability and performance.

<span class="mw-page-title-main">SingleStore</span> Database management system

SingleStore is a proprietary, cloud-native database designed for data-intensive applications. A distributed, relational, SQL database management system (RDBMS) that features ANSI SQL support, it is known for speed in data ingest, transaction processing, and query processing.

<span class="mw-page-title-main">Oracle NoSQL Database</span> Distributed database

Oracle NoSQL Database is a NoSQL-type distributed key-value database from Oracle Corporation. It provides transactional semantics for data manipulation, horizontal scalability, and simple administration and monitoring.

A wide-column store is a column-oriented DBMS and therefore a special type of NoSQL database. It uses tables, rows, and columns, but unlike a relational database, the names and format of the columns can vary from row to row in the same table. A wide-column store can be interpreted as a two-dimensional key–value store. Google's Bigtable is one of the prototypical examples of a wide-column store.

Database scalability is the ability of a database to handle changing demands by adding or removing resources. Databases use a host of techniques to cope. According to Marc Brooker: "a system is scalable in the range where marginal cost of additional workload is nearly constant." Serverless technologies fit this definition, although total cost of ownership, not just infrastructure cost, must be considered.

<span class="mw-page-title-main">YugabyteDB</span> Transactional distributed SQL database

YugabyteDB is a high-performance transactional distributed SQL database for cloud-native applications, developed by Yugabyte.

References

  1. "Release cassandra-5.0.2".
  2. Carpenter, Jeff; Hewitt, Eben (2022). Cassandra: The Definitive Guide (3rd ed.). O'Reilly Media. ISBN 978-1-4920-9710-5.
  3. Casares, Joaquin (November 5, 2012). "Multi-datacenter Replication in Cassandra". DataStax. Retrieved July 25, 2013. Cassandra's innate datacenter concepts are important as they allow multiple workloads to be run across multiple datacenters...
  4. "Apache Cassandra Documentation Overview" . Retrieved January 21, 2021.
  5. Hamilton, James (July 12, 2008). "Facebook Releases Cassandra as Open Source" . Retrieved June 4, 2009.
  6. "Is this the new hotness now?". Mail-archive.com. March 2, 2009. Archived from the original on April 25, 2010. Retrieved March 29, 2010.
  7. "Cassandra is an Apache top level project". Mail-archive.com. February 18, 2010. Archived from the original on March 28, 2010. Retrieved March 29, 2010.
  8. "The meaning behind the name of Apache Cassandra". Archived from the original on November 1, 2016. Retrieved July 19, 2016. Apache Cassandra is named after the Greek mythological prophet Cassandra. [...] Because of her beauty Apollo granted her the ability of prophecy. [...] When Cassandra of Troy refused Apollo, he put a curse on her so that all of her and her descendants' predictions would not be believed. [...] Cassandra is the cursed Oracle[.]
  9. DataStax (January 15, 2013). "About data consistency". Archived from the original on July 26, 2013. Retrieved July 25, 2013.
  10. Ellis, Jonathan (March 2, 2012). "The Schema Management Renaissance in Cassandra 1.1". DataStax. Retrieved July 25, 2013.
  11. Ellis, Jonathan (February 15, 2012). "Schema in Cassandra 1.1". DataStax. Retrieved July 25, 2013.
  12. Ellis, Jonathan (December 3, 2010). "What's new in Cassandra 0.7: Secondary indexes". DataStax. Retrieved July 25, 2013.
  13. Rodriguez, Alain (July 27, 2016). "About Deletes and Tombstones in Cassandra".
  14. "DataStax C/C++ Driver for Apache Cassandra". DataStax. Retrieved December 15, 2014.
  15. "CQL". Archived from the original on January 13, 2016. Retrieved January 5, 2016.
  16. Hayashibara, Naohiro; Défago, Xavier; Yared, Rami; Katayama, Takuya (2004). "The Φ Accrual Failure Detector". IEEE Symposium on Reliable Distributed Systems. pp. 66–78. doi:10.1109/RELDIS.2004.1353004.
  17. "NodeTool". Cassandra Wiki. Archived from the original on January 13, 2016. Retrieved January 5, 2016.
  18. "How to monitor Cassandra performance metrics". Datadog. December 3, 2015. Retrieved January 5, 2016.
  19. "Metrics". Cassandra Wiki. Archived from the original on November 12, 2015. Retrieved January 5, 2016.
  20. "Monitoring". Cassandra Documentation. Retrieved February 1, 2018.
  21. "Cassandra Server Releases". cassandra.apache.org. Retrieved December 15, 2015.

Bibliography