Cassandra is an open source data storage system, developed at Facebook for inbox search and designed for storing and managing large amounts of data across commodity servers. It can serve as both
- A real-time data store for online applications
- A read-intensive database for business intelligence systems
Cassandra was designed to handle big data workloads across multiple nodes without any single point of failure. The various factors responsible for using Cassandra are
- It is fault tolerant and consistent
- Scalability from gigabytes to petabytes
- It is a column-oriented database
- No single point of failure
- No need for a separate caching layer
- Flexible schema design
- It has flexible data storage, easy data distribution, and fast writes
- Tunable consistency (note that Cassandra does not support full ACID transactions)
- Multi-data center and cloud capable
- Data compression
In Cassandra, a composite type allows you to define a key or a column name as a concatenation of data of different types. Two kinds of composite types can be used: composite row (partition) keys and composite column names.
The main components of the Cassandra data model are
- Cluster
- Keyspace
- Column family
- Column
A column family in Cassandra refers to a collection of rows.
A cluster is a container for keyspaces. A Cassandra database is segmented over several machines that operate together. The cluster is the outermost container: it arranges the nodes in a ring format and assigns data to them. Each node holds replicas of data, which take charge in case of a data handling failure.
The other components of Cassandra are
- Data Center
- Commit log
- Bloom Filter
In Cassandra, a keyspace is a namespace that determines data replication on nodes. A cluster contains one or more keyspaces, typically one per application.
Syntax for creating keyspace in Cassandra is
CREATE KEYSPACE <identifier> WITH <properties>
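For example, a minimal sketch (the keyspace name "demo" and the replication settings are illustrative):

```sql
-- Create a keyspace with 3 replicas using SimpleStrategy
CREATE KEYSPACE demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
```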
A Cassandra column basically holds three values
- Column name
- Value
- Timestamp
ALTER KEYSPACE can be used to change properties such as the number of replicas and the durable_writes setting of a keyspace.
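A hedged sketch of altering a keyspace (the keyspace name and the values are illustrative):

```sql
-- Change the replica count and turn off durable writes for keyspace "demo"
ALTER KEYSPACE demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2}
  AND durable_writes = false;
```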
Cassandra cqlsh is a shell for the CQL query language that enables users to communicate with the database. By using cqlsh, you can do the following things
- Define a schema
- Insert data and
- Execute a query
There are various cqlsh shell commands in Cassandra. The “CAPTURE” command captures the output of a command and adds it to a file, while the “CONSISTENCY” command displays the current consistency level or sets a new one.
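In a cqlsh session the two commands look roughly like this (the output path is illustrative):

```sql
-- Append the output of subsequent queries to a file
CAPTURE '/tmp/query_output.txt';
-- Show the current consistency level
CONSISTENCY;
-- Set a new consistency level
CONSISTENCY QUORUM;
```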
While creating a table, a primary key is mandatory; it is made up of one or more columns of the table.
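As a sketch, a table with a compound primary key might look like this (the keyspace and table names are illustrative; user_id is the partition key and login_time is a clustering column):

```sql
CREATE TABLE demo.user_logins (
  user_id    uuid,
  login_time timestamp,
  ip_address text,
  PRIMARY KEY (user_id, login_time)
);
```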
While adding a column, you need to take care that the
- Column name does not conflict with the existing column names
- Table is not defined with the compact storage option
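Assuming a hypothetical table demo.user_logins, adding a new, non-conflicting column is a one-line statement:

```sql
-- Add a column; fails if the name conflicts or the table uses COMPACT STORAGE
ALTER TABLE demo.user_logins ADD device text;
```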
Cassandra CQL collections help you to store multiple values in a single variable. In Cassandra, you can use CQL collections in the following ways
- List: used when the order of the data needs to be maintained and a value may be stored multiple times (duplicates are allowed)
- Set: used to store a group of elements returned in sorted order (holds unique, non-repeating elements)
- Map: a data type used to store key-value pairs of elements
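A sketch of the three collection types in one table (all names and values are illustrative):

```sql
CREATE TABLE demo.users (
  user_id uuid PRIMARY KEY,
  emails  set<text>,          -- unique elements
  tasks   list<text>,         -- ordered, duplicates allowed
  attrs   map<text, text>     -- key-value pairs
);

UPDATE demo.users
SET emails = emails + {'a@example.com'},
    tasks  = tasks  + ['review'],
    attrs  = attrs  + {'tz': 'UTC'}
WHERE user_id = 62c36092-82a1-3a00-93d1-46196ee77204;
```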
Cassandra writes data in three components
- Commitlog write
- Memtable write
- SStable write
Cassandra first writes data to the commit log, then to an in-memory table structure called the memtable, and finally to the SSTable.
An SSTable consists mainly of 2 files
- Index file (bloom filter & key offset pairs)
- Data file (actual column data)
A bloom filter is a space efficient data structure that is used to test whether an element is a member of a set. In other words, it is used to determine whether an SSTable has data for a particular row. In Cassandra it is used to save IO when performing a KEY LOOKUP.
- Cassandra appends changed data to the commit log
- The commit log acts as a crash recovery log for data
- A write operation is never considered successful until the changed data is appended to the commit log
Data will not be lost once commit log is flushed out to file.
SSTables are immutable, so a row cannot be removed from them. When a row needs to be deleted, Cassandra marks the column value with a special marker called a tombstone. When the data is read, tombstoned values are treated as deleted.
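For illustration, assuming a hypothetical table demo.user_logins, a delete simply writes a tombstone:

```sql
DELETE ip_address FROM demo.user_logins
WHERE user_id = 62c36092-82a1-3a00-93d1-46196ee77204
  AND login_time = '2015-01-01 00:00:00';
-- The column is only physically removed later, during compaction.
```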
Tunable consistency is a phenomenal characteristic that makes Cassandra a favored database choice of developers, analysts, and big data architects. Consistency refers to up-to-date and synchronized data rows across all replicas. Cassandra’s tunable consistency allows users to select the consistency level best suited to their use cases. It supports two consistencies: Eventual Consistency and Strong Consistency.
The former guarantees that when no new updates are made on a given data item, all accesses eventually return the last updated value. Systems with eventual consistency are said to have achieved replica convergence.
For Strong consistency, Cassandra supports the following condition:
R + W > N, where
N – Number of replicas
W – Number of nodes that need to agree for a successful write
R – Number of nodes that need to agree for a successful read
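For example, with a replication factor of N = 3, reading and writing at QUORUM gives W = 2 and R = 2, so R + W = 4 > 3 and every read overlaps the latest write. In cqlsh this is simply:

```sql
-- With replication_factor = 3, QUORUM = 2 nodes,
-- so R + W = 2 + 2 = 4 > N = 3 (strong consistency)
CONSISTENCY QUORUM;
```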
Cassandra performs the write function by applying two commits: first it writes to a commit log on disk, and then commits to an in-memory structure known as a memtable. Once the two commits are successful, the write is achieved. Writes are later persisted on disk in the SSTable (sorted string table) structure. Cassandra offers speedier write performance.
DataStax OpsCenter: an internet-based management and monitoring solution for Cassandra clusters from DataStax. A free edition is available to download, and a commercial Enterprise edition of OpsCenter is also offered.
- SPM primarily administers Cassandra metrics and various OS and JVM metrics. Besides Cassandra, SPM also monitors Hadoop, Spark, Solr, Storm, ZooKeeper and other Big Data platforms. The main features of SPM include correlation of events and metrics, distributed transaction tracing, creating real-time graphs with zooming, anomaly detection and heartbeat alerting.
Similar to a table, a memtable is an in-memory/write-back cache space consisting of content in key and column format. The data in a memtable is sorted by key, and each column family has a distinct memtable that retrieves column data via the key. It stores writes until it is full, and is then flushed out.
SSTable expands to ‘Sorted String Table’; it refers to an important data file in Cassandra to which memtables are regularly flushed. SSTables are stored on disk and exist for each Cassandra table. Being immutable, SSTables do not allow any further addition or removal of data items once written. For each SSTable, Cassandra creates three separate files: a partition index, a partition summary and a bloom filter.
Associated with an SSTable, a bloom filter is an off-heap (off the Java heap, in native memory) data structure used to check whether any data is available in the SSTable before performing any disk I/O operation.
With a strong requirement to scale systems when additional resources are needed, the CAP theorem plays a major role in maintaining the scaling strategy. It is an efficient way to reason about scaling in distributed systems. The Consistency, Availability and Partition tolerance (CAP) theorem states that in distributed systems like Cassandra, users can enjoy only two of these three characteristics.
One of them needs to be sacrificed. Consistency guarantees the return of the most recent write to the client; Availability means a reasonable response within minimum time; and with Partition Tolerance the system continues its operations when network partitions occur. The two options available in practice are AP and CP.
While a node is a single machine running Cassandra, a cluster is a collection of nodes holding similar types of data, grouped together. Data centers are useful components when serving customers in different geographical areas. You can group different nodes of a cluster into different data centers.
Using CQL (Cassandra Query Language). cqlsh is the shell used for interacting with the database.
Cassandra Data Model consists of four main components:
Cluster: Made up of multiple nodes and keyspaces
Keyspace: a namespace to group multiple column families, typically one per application
Column: consists of a column name, value and timestamp
ColumnFamily: multiple columns with row key reference.
CQL is the Cassandra Query Language, used to access and query the Apache distributed database. It relies on a CQL parser that pushes all the implementation details to the server. The syntax of CQL is similar to SQL, but it does not alter the Cassandra data model.
Compaction refers to a maintenance process in Cassandra in which SSTables are reorganized to optimize the data structures on disk. There are two types of compaction in Cassandra:
Minor compaction: started automatically when a new SSTable is created. Here, Cassandra condenses all equally sized SSTables into one.
Major compaction: triggered manually using nodetool. It compacts all SSTables of a column family into one.
Unlike relational databases, Cassandra does not support ACID transactions.
Cqlsh expands to Cassandra Query Language Shell, the CQL interactive terminal. It is a Python-based command-line prompt used on Linux or Windows to execute CQL and shell commands like ASSUME, CAPTURE, CONSISTENCY, COPY, DESCRIBE and many others. With cqlsh, users can define a schema, insert data and execute queries.
A Cassandra super column is a unique element consisting of similar collections of data. Super columns are actually key-value pairs whose values are columns: a sorted array of columns following the hierarchy keyspace > column family > super column > column data structure.
Similar to row keys, super column data entries contain no independent values but are used to collect other columns. It is interesting to note that super column keys appearing in different rows do not necessarily match.
Both elements work on the principle of a tuple having a name and a value. However, the former’s value is a string, while the latter’s value is a map of columns with different data types.
Unlike columns, super columns do not contain a timestamp as a third component.
As the name suggests, a column family refers to a structure having an unbounded number of rows, each holding key-value pairs, where the key is the name of a column and the value represents the column data. It is much like a hashmap in Java or a dictionary in Python. Remember, the rows are not limited to a predefined list of columns here; the column family is absolutely flexible, with one row having 100 columns while another has only 2.
Source command is used to execute a file consisting of CQL statements.
Thrift is a legacy RPC protocol, or API, unified with a code generation tool. The purpose of using Thrift in Cassandra is to facilitate access to the database across programming languages.
A tombstone is a row marker indicating a column deletion. These marked columns are deleted during compaction. Tombstones are of great significance because Cassandra supports eventual consistency: the deletion must be recorded so that replicas that missed it do not resurrect the data.
Since Cassandra is a Java application, it can successfully run on any Java-driven platform with a Java Runtime Environment (JRE) or Java Virtual Machine (JVM). Cassandra also runs on the RedHat, CentOS, Debian and Ubuntu Linux platforms.
By default, Cassandra uses port 7000 for cluster management, 9160 for Thrift clients, and 8080 for JMX. These are all TCP ports and can be edited in the configuration file: bin/cassandra.in.sh
Yes, but keep the following processes in mind.
- Do not forget to clear the commit log with ‘nodetool drain’
- Turn off Cassandra to check that there is no data left in the commit log
- Delete the SSTable files for the removed column families
ReplicationFactor is the measure of the number of copies of data that exist in the cluster. The replication factor should not exceed the number of nodes in the cluster.
Yes, but it will require running repair to alter the replica count of existing data.
NoSQL (sometimes expanded to “not only SQL”) is a broad class of database management systems that differ from the classic relational database management system (RDBMS) model in some significant ways.
- Specifically designed for high load
- Natively support horizontal scalability
- Fault tolerant
- Store data in denormalised manner
- Do not usually enforce strict database schema
- Do not usually store data in a table
- Sometimes provide eventual consistency instead of ACID transactions
In contrast to RDBMS, NoSQL systems:
- Do not guarantee data consistency
- Usually support a limited query language (subset of SQL or another custom query language)
- May not provide support for transactions/distributed transactions
- Do not usually use some advanced concepts of RDBMS, such as triggers, views, stored procedures
NoSQL implementations can be categorised by their manner of implementation:
- Document store
- Key-value store
- Multivalue databases
- Object databases
- Tuple store
Apache Cassandra is an open source, free to use, distributed, decentralized, elastically and linearly scalable, highly available, fault-tolerant, tunably consistent, column-oriented database that bases its distribution design on Amazon’s Dynamo and its data model on Google’s Bigtable. Created at Facebook, it is now used at some of the most popular sites on the Web. In terms of the CAP theorem, Cassandra is usually placed in the AP bucket.
Our use case was more write intensive. Since Cassandra provides the availability and tunable consistency our use case required, we preferred Cassandra.
HBase is really good for low-latency read/write use cases.
– The Cassandra data model has 4 main concepts: cluster, keyspace, column, and column family.
– Clusters contain many nodes (machines) and can contain multiple keyspaces.
– A keyspace is a namespace to group multiple column families, typically one per application.
– A column contains a name, value and timestamp.
– A column family contains multiple columns referenced by a row key.
Cassandra is an open source, scalable, and highly available “NoSQL” distributed database management system from Apache. Cassandra claims to offer fault-tolerant linear scalability with no single point of failure. Cassandra sits in the column-family NoSQL camp. The Cassandra data model is designed for large-scale distributed data and trades ACID-compliant data practices for performance and availability. Cassandra is optimized for very fast and highly available writes. Cassandra is written in Java and can run on a vast array of operating systems and platforms.
Cassandra is a Java application, meaning that a compiled binary distribution of Cassandra can run on any platform that has a Java Runtime Environment (JRE), also referred to as a Java Virtual Machine (JVM). DataStax strongly recommends using the Oracle Sun Java Runtime Environment (JRE), version 1.6.0_19 or later, for optimal performance. Packaged releases are provided for the RedHat, CentOS, Debian and Ubuntu Linux platforms.
Cassandra 0.8 is the first release to introduce Cassandra Query Language(CQL), the first standardized query language for Apache Cassandra. CQL pushes all of the implementation details to the server in the form of a CQL parser. Clients built on CQL only need to know how to interpret query result objects. CQL is the start of the first officially supported client API for Apache Cassandra. CQL drivers for the various languages are hosted with the Apache Cassandra project.
CQL Syntax is based on SQL (Structured Query Language), the standard for relational database manipulation. Although CQL has many similarities to SQL, it does not change the underlying Cassandra data model. There is no support for JOINS, for example.
DataStax supplies both a free and a commercial version of OpsCenter, which is a visual, browser-based management tool for Cassandra. With OpsCenter, a user can visually carry out many administrative tasks, monitor a cluster for performance, and do much more. Downloads of OpsCenter are available on the DataStax website.
A number of command line tools also ship with Cassandra for querying/writing to the database, performing administration functions, etc.
Cassandra also exposes a number of statistics and management operations via Java Management Extensions(JMX). Java Management Extensions (JMX) is a Java technology that supplies tools for managing and monitoring Java Applications and services. Any statistics or operation that a Java application has exposed as an MBean can then be monitored or manipulated using JMX.
During normal operation, Cassandra outputs information and statistics that you can monitor using JMX-compliant tools such as JConsole, the Cassandra nodetool utility, or the DataStax OpsCenter centralized management console. With the same tools, you can perform certain administrative commands and operation such as flushing caches or doing a repair.
The CAP theorem (also called as Brewer’s theorem after its author, Eric Brewer) states that within a large-scale distributed data system, there are three requirements that have a relationship of sliding dependency: Consistency, Availability, and Partition Tolerance.
CAP theorem states that in any given system, you can strongly support only two of these three.
Cassandra is distributed, which means that it is capable of running on multiple machines while appearing to users as a unified whole. Cassandra being decentralized means that there is no single point of failure. All of the nodes in a Cassandra cluster function exactly the same: there is no master and no slave.
Elastic Scalability means that your cluster can seamlessly scale up and scale back down. That actually means that adding more servers to cluster would improve and scale performance of cluster in linear fashion without any manual interventions. Vice versa is equally true.
Consistency essentially means that a read always returns the most recently written value. Cassandra allows you to easily decide the level of consistency you require, in balance with the level of availability. This is controlled by parameters like replication factor and consistency level.
Cassandra is highly available. You can remove failed Cassandra nodes from the cluster without losing any data and without bringing the whole cluster down. In similar fashion, you can improve performance by replicating data to multiple data centers.
A data center is a collection of related nodes. It can be a physical data center or a virtual data center. Replication is set per data center, and depending on the replication factor, data can be written to multiple data centers. However, a data center should never span physical locations, whereas a cluster contains one or more data centers and can span physical locations.
It is a crash-recovery mechanism. All data is written first to the commit log (file) for durability. After all its data has been flushed to SSTables, it can be archived, deleted, or recycled.
A sorted string table (SSTable) is an immutable data file to which Cassandra writes memtables periodically. SSTables are append only and stored on disk sequentially and maintained for each Cassandra table.
An RDBMS table, by contrast, is a collection of ordered columns fetched by row.
Gossip is a peer-to-peer communication protocol in which nodes periodically exchange state information about themselves and about other nodes they know about. The gossip process runs every second and exchanges state messages with up to three other nodes in the cluster.
This is a kind of partitioner that stores rows by key order, aligning the physical structure of the data with your sort order. Configuring your column family to use order-preserving partitioning allows you to perform range slices, meaning that Cassandra knows which nodes have which keys. This partitioner is somewhat the opposite of the random partitioner; it has the advantage of allowing efficient range queries, but the disadvantage of unevenly distributing keys.
The order-preserving partitioner (OPP) is implemented by the org.apache.cassandra.dht.OrderPreservingPartitioner class. There is a special kind of OPP called the collating order-preserving partitioner (COPP). It acts like a regular OPP, but sorts the data in a collated manner according to English/US lexicography instead of byte ordering. For this reason, it is useful for locale-aware applications. The COPP is implemented by the org.apache.cassandra.dht.CollatingOrderPreservingPartitioner class.
In Cassandra, the logical division that associates similar data is called a column family. The basic Cassandra data structures are: the column, which is a name/value pair (plus a client-supplied timestamp of when it was last updated), and the column family, which is a container for rows that have similar, but not identical, column sets. The unique identifier for each row is called a row key. A keyspace is the outermost container for data in Cassandra, corresponding closely to a relational database.
It is used to display a synopsis and a brief description of all cqlsh commands.
The CAPTURE command captures the output of a command and adds it to a file.
“Materialized” means storing a full copy of the original data so that everything you need to answer a query is right there, without forcing you to look up the original data. Because you don’t have a SQL WHERE clause, you can recreate this effect by writing your data to a second column family that is created specifically to represent that query.
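As a sketch, the second-column-family pattern might look like this (all names are illustrative): the application maintains a query-specific table alongside the canonical one and writes to both on every update.

```sql
-- Canonical table, keyed by user_id
CREATE TABLE demo.users_by_id (
  user_id uuid PRIMARY KEY,
  email   text,
  name    text
);
-- "Materialized" copy, keyed for the lookup-by-email query
CREATE TABLE demo.users_by_email (
  email   text PRIMARY KEY,
  user_id uuid,
  name    text
);
```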
This is important because Cassandra use timestamps to determine the most recent write value.
Super columns suffer from a number of problems, not least of which is that it is necessary for Cassandra to deserialize all of the sub-columns of a super column when querying (even if the result will only return a small subset). As a result, there is a practical limit to the number of sub-columns per super column that can be stored before performance suffers.
In theory, this could be fixed within Cassandra by properly indexing sub-columns, but consensus is that composite columns are a better solution, and they work without the added complexity.
Querying becomes more flexible when you add secondary indexes to table columns. You can add indexed columns to the WHERE clause of a SELECT.
When to use secondary indexes: you want to query on a column that isn’t the primary key and isn’t part of a composite key, and the column you want to query on has few unique values (say you have a column Town; that is a good choice for secondary indexing because lots of people will be from the same town, whereas date of birth will not be such a good choice).
When to avoid secondary indexes: try not to use secondary indexes on columns that contain a high count of unique values and that will produce few results. Remember that an index makes writing to the DB much slower, you can find a value only by exact index match, and you need to make requests to all servers in the cluster to find a value by index.
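A minimal sketch of a secondary index on a low-cardinality column (all names are illustrative):

```sql
CREATE TABLE demo.people (
  person_id uuid PRIMARY KEY,
  name      text,
  town      text
);
CREATE INDEX people_town_idx ON demo.people (town);
-- The indexed column can now appear in WHERE:
SELECT name FROM demo.people WHERE town = 'Springfield';
```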
We query Cassandra using CQL (Cassandra Query Language). We use cqlsh for interacting with the DB.
Yes, Cassandra works pretty well on Windows. Right now there are Linux- and Windows-compatible versions available.
This is because Cassandra does not support joins; users can join data on their own end.
The COPY command is used to copy data to and from Cassandra and a file.
Yes and No, depending on what you mean by ‘transactions’. Unlike relational databases, Cassandra does not offer fully ACID-compliant transactions. There are no locking or transactional dependencies when concurrently updating multiple rows or column families. But if by ‘transactions’ you mean real-time data entry and retrieval, with durability and tunable consistency, then yes.
Cassandra does not support transactions in the sense of bundling multiple row updates into one all-or-nothing operation. Nor does it roll back when a write succeeds on one replica but fails on other replicas. It is possible in Cassandra for a write operation to report a failure to the client but still actually persist the write to a replica.
However, this does not mean that Cassandra cannot be used as an operational or real time data store. Data is very safe in Cassandra because writes in Cassandra are durable. All writes to a replica node are recorded both in memory and in a commit log before they are acknowledged as a success. If a crash or server failure occurs before the memory tables are flushed to disk, the commit log is replayed on restart to recover any lost writes.
The compaction process merges keys, combines columns, evicts tombstones, consolidates SSTables, and creates a new index in the merged SSTable.
Anti-entropy, or replica synchronization, is the mechanism in Cassandra for ensuring that data on different nodes is updated to the newest version.
Consistency means how up-to-date and synchronized a row of Cassandra data is on all of its replicas.
With this level, write operations are handled in the background, asynchronously. It is the fastest way to write data, and the one that offers the least confidence that operations will succeed.
It assures that our write operation was successful on at least one node, even if the acknowledgment is only for a hint. It is a relatively weak level of consistency.
It is used to ensure that the write operation was written to at least one node, including its commit log and memtable.
A quorum is the number of nodes that represents consensus on an operation. It is determined by (replication factor / 2) + 1, rounded down; for example, with a replication factor of 3, the quorum is 2.
SOURCE command is used to execute a file that contains CQL statements.
Every node as specified in your configuration entry must successfully acknowledge the write operation. If any nodes do not acknowledge the write operation, the write fails. This has the highest level of consistency and the lowest level of performance.
It is a mechanism to ensure availability, fault tolerance, and graceful degradation. If a write operation occurs and a node that is intended to receive that write goes down, a note (the “hint”) is given (“handed off”) to a different live node to indicate that it should replay the write operation to the unavailable node when it comes back online. This does two things: it reduces the amount of time that it takes for a node to get all the data it missed once it comes back online, and it improves write performance at lower consistency levels.
Merkle tree is a binary tree data structure that summarizes in short form the data in a larger dataset. Merkle trees are used in Cassandra to ensure that the peer-to-peer network of nodes receives data blocks unaltered and unharmed.
It means a query by column name for a set of keys.
It means query to get a subset of columns for a set of keys.
A seed is a node that already exists in a Cassandra cluster and is used by newly added nodes to get up and running. The newly added node can start gossiping with the seed node to get state information and learn the topology of the node ring. There may be many seeds in a cluster.
This is a type of read query. Use get_slice() to query by a single column name or a range of column names. Use get_range_slice() to return a subset of columns for a range of keys.
Cassandra does not immediately delete data following a delete operation. Instead, it marks the data with a “tombstone,” an indicator that the column has been deleted but not removed entirely yet. The tombstone can then be propagated to other replicas.
Like a batch update in the relational world, the batch_mutate operation allows grouping calls on many keys into a single call in order to save on the cost of network round trips. If batch_mutate fails in the middle of its list of mutations, there will be no rollback, so any updates that have already occurred up to this point will remain intact.
Hector is an open source project written in Java using the MIT license. It was one of the early Cassandra clients and is used in production at Outbrain. It wraps Thrift and offers JMX, connection pooling, and failover.
Kundera is an object-relational mapping (ORM) implementation for Cassandra written using Java annotations.
This is a kind of Partitioner that uses a BigIntegerToken with an MD5 hash to determine where to place the keys on the node ring. This has the advantage of spreading your keys evenly across your cluster, but the disadvantage of causing inefficient range queries. This is the default partitioner.
This is another mechanism to ensure consistency throughout the node ring. In a read operation, if Cassandra detects that some nodes have responded with data that is inconsistent with the response of other, newer nodes, it makes a note to perform a read repair on the old nodes. The read repair means that Cassandra will send a write request to the nodes with stale data to get them up to date with the newer data returned from the original read operation. It does this by pulling all the data from the node, performing a merge, and writing the merged data back to the nodes that were out of sync. The detection of inconsistent data is made by comparing timestamps and checksums.
A snitch is Cassandra’s way of mapping a node to a physical location in the network.
It helps determine the location of a node relative to another node in order to assist with discovery and ensure efficient request routing.