Integrate ScyllaDB with Databricks
==================================
ScyllaDB is compatible with Apache Cassandra at the CQL binary protocol level, so any driver that speaks CQL works with ScyllaDB. See `ScyllaDB Drivers <https://docs.scylladb.com/stable/drivers/index.html>`_. Any application that uses a CQL driver, such as a Databricks Spark cluster, can therefore connect to ScyllaDB.
Although your requirements may differ, this example uses the resources and settings described below.
Before you begin
----------------
Verify that ScyllaDB is installed and that you know the ScyllaDB server IP address. Make sure you can reach the server on port 9042, the CQL port:
.. code-block:: none

   curl <scylla_IP>:9042
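If ``cqlsh`` is available (it ships with ScyllaDB), you can also confirm that the node answers CQL queries, not just that the port is open:

.. code-block:: none

   cqlsh <scylla_IP> 9042 -e "SELECT release_version FROM system.local;"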
Procedure
---------
Create a Databricks cluster with the following runtime version:
.. code-block:: none

   Runtime: 9.1 LTS (Scala 2.12, Spark 3.1.2)
Add the following Spark configuration, replacing the placeholders with your catalog name, host, and credentials:
.. code-block:: none

   spark.sql.catalog.<your_catalog> com.datastax.spark.connector.datasource.CassandraCatalog
   spark.sql.catalog.<your_catalog>.spark.cassandra.connection.host <your_host>
   spark.cassandra.auth.username <your_username>
   spark.cassandra.auth.password <your_password>
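Once the connector library is installed and the catalog is registered, ScyllaDB keyspaces appear as namespaces in Spark SQL. A minimal sketch, assuming the ``<your_catalog>`` name configured above (requires a running cluster):

.. code-block:: python

   # List ScyllaDB keyspaces exposed through the registered catalog.
   spark.sql("SHOW NAMESPACES FROM <your_catalog>").show()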
Install the Spark Cassandra Connector library on the cluster. The connector's Scala and Spark versions must match the runtime, in this example Scala 2.12 and Spark 3.1:

.. code-block:: none

   com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.1.0
Test case
---------
Create a keyspace and a table in ScyllaDB, and insert a row of test data:

.. code-block:: none

   CREATE KEYSPACE databriks WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor' : 3};

   CREATE TABLE databriks.demo1 (pk text PRIMARY KEY, ck1 text, ck2 text);

   INSERT INTO databriks.demo1 (pk, ck1, ck2) VALUES ('pk', 'ck1', 'ck2');
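To confirm the insert landed, you can read the row back, for example from ``cqlsh`` (output shown approximately):

.. code-block:: none

   SELECT * FROM databriks.demo1;

    pk | ck1 | ck2
   ----+-----+-----
    pk | ck1 | ck2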
In a Databricks notebook, read the table through the configured catalog and display it:

.. code-block:: python

   df = spark.read.table("<your_catalog>.databriks.demo1")
   display(df)
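Writes go through the same catalog path. A minimal sketch using the Spark 3 ``DataFrameWriterV2`` API, assuming the catalog and table created above; the appended row values are illustrative:

.. code-block:: python

   # Build a one-row DataFrame matching the demo1 schema and append it
   # to the ScyllaDB table through the catalog.
   rows = [("pk2", "ck1-2", "ck2-2")]
   df2 = spark.createDataFrame(rows, schema="pk string, ck1 string, ck2 string")
   df2.writeTo("<your_catalog>.databriks.demo1").append()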