docs/sql-data-sources-jdbc.md
Spark SQL also includes a data source that can read data from other databases using JDBC. This functionality should be preferred over using JdbcRDD, because the results are returned as a DataFrame and can easily be processed in Spark SQL or joined with other data sources. The JDBC data source is also easier to use from Java or Python as it does not require the user to provide a ClassTag. (Note that this is different from the Spark SQL JDBC server, which allows other applications to run queries using Spark SQL.)
To get started you will need to include the JDBC driver for your particular database on the Spark classpath. For example, to connect to PostgreSQL from the Spark shell you would run the following command:
{% highlight bash %}
./bin/spark-shell --driver-class-path postgresql-9.4.1207.jar --jars postgresql-9.4.1207.jar
{% endhighlight %}
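Once the driver is on the classpath, a table can be loaded as a DataFrame. The following is a minimal sketch; the URL, table name, and credentials are placeholders:

{% highlight scala %}
// A minimal sketch of reading a table over JDBC into a DataFrame.
// The URL, table name, and credentials below are placeholders.
val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost/test")
  .option("dbtable", "schema.tablename")
  .option("user", "username")
  .option("password", "password")
  .load()

// The result is an ordinary DataFrame: it can be filtered, joined with
// other sources, or registered as a temporary view.
jdbcDF.printSchema()
{% endhighlight %}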
Spark supports the following case-insensitive options for JDBC. The data source options of JDBC can be set via:

* the `.option`/`.options` methods of
  * `DataFrameReader`
  * `DataFrameWriter`
* the `OPTIONS` clause at `CREATE TABLE USING DATA_SOURCE`

For connection properties, users can specify the JDBC connection properties in the data source options. <code>user</code> and <code>password</code> are normally provided as connection properties for logging into the data sources.
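For illustration, both styles below configure the same source; this is a sketch in which the URL, table name, and credentials are placeholders:

{% highlight scala %}
// 1. The .option/.options methods of DataFrameReader:
val df = spark.read
  .format("jdbc")
  .options(Map(
    "url" -> "jdbc:postgresql://localhost/test",
    "dbtable" -> "schema.tablename",
    "user" -> "username",
    "password" -> "password"))
  .load()

// 2. The OPTIONS clause of CREATE TABLE USING, issued through spark.sql:
spark.sql("""
  CREATE TABLE jdbc_table
  USING org.apache.spark.sql.jdbc
  OPTIONS (
    url 'jdbc:postgresql://localhost/test',
    dbtable 'schema.tablename',
    user 'username',
    password 'password'
  )
""")
{% endhighlight %}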
<table>
  <thead><tr><th><b>Property Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th></tr></thead>
  <tr>
    <td><code>url</code></td>
    <td>(none)</td>
    <td>
      The JDBC URL of the form <code>jdbc:subprotocol:subname</code> to connect to. The source-specific connection properties may be specified in the URL, e.g., <code>jdbc:postgresql://localhost/test?user=fred&amp;password=secret</code>.
    </td>
    <td>read/write</td>
  </tr>
  <tr>
    <td><code>dbtable</code></td>
    <td>(none)</td>
    <td>
      The JDBC table that should be read from or written into. Note that when using it in the read path anything that is valid in a <code>FROM</code> clause of a SQL query can be used. For example, instead of a full table you could also use a subquery in parentheses. It is not allowed to specify <code>dbtable</code> and <code>query</code> options at the same time.
    </td>
    <td>read/write</td>
  </tr>
  <tr>
    <td><code>query</code></td>
    <td>(none)</td>
    <td>
      A query that will be used to read data into Spark. The specified query will be parenthesized and used as a subquery in the <code>FROM</code> clause. Spark will also assign an alias to the subquery clause. As an example, Spark will issue a query of the following form to the JDBC source.<br><br>
      <code>SELECT &lt;columns&gt; FROM (&lt;user_specified_query&gt;) spark_gen_alias</code><br><br>
      Below are a couple of restrictions while using this option.
      <ol>
        <li>It is not allowed to specify <code>dbtable</code> and <code>query</code> options at the same time.</li>
        <li>It is not allowed to specify <code>query</code> and <code>partitionColumn</code> options at the same time. When specifying the <code>partitionColumn</code> option is required, the subquery can be specified using the <code>dbtable</code> option instead, and partition columns can be qualified using the subquery alias provided as part of <code>dbtable</code>.<br>
          Example:<br>
          <code>
            spark.read.format("jdbc")<br>
            .option("url", jdbcUrl)<br>
            .option("query", "select c1, c2 from t1")<br>
            .load()
          </code></li>
      </ol>
    </td>
    <td>read</td>
  </tr>
  <tr>
    <td><code>prepareQuery</code></td>
    <td>(none)</td>
    <td>
      A prefix that will form the final query together with <code>query</code>. As the specified <code>query</code> is parenthesized as a subquery in the <code>FROM</code> clause and some databases do not accept all clauses in subqueries, the <code>prepareQuery</code> property offers a way to run such complex queries. As an example, Spark will issue a query of the following form to the JDBC source.<br><br>
      <code>&lt;prepareQuery&gt; SELECT &lt;columns&gt; FROM (&lt;user_specified_query&gt;) spark_gen_alias</code><br><br>
      Below are a couple of examples.
      <ol>
        <li>MSSQL Server does not accept <code>WITH</code> clauses in subqueries but it is possible to split such a query into <code>prepareQuery</code> and <code>query</code>:<br>
          <code>
            spark.read.format("jdbc")<br>
            .option("url", jdbcUrl)<br>
            .option("prepareQuery", "WITH t AS (SELECT x, y FROM tbl)")<br>
            .option("query", "SELECT * FROM t WHERE x > 10")<br>
            .load()
          </code></li>
        <li>MSSQL Server does not accept temp table clauses in subqueries but it is possible to split such a query into <code>prepareQuery</code> and <code>query</code>:<br>
          <code>
            spark.read.format("jdbc")<br>
            .option("url", jdbcUrl)<br>
            .option("prepareQuery", "(SELECT * INTO #TempTable FROM (SELECT * FROM tbl) t)")<br>
            .option("query", "SELECT * FROM #TempTable")<br>
            .load()
          </code></li>
      </ol>
    </td>
    <td>read/write</td>
  </tr>
  <tr>
    <td><code>partitionColumn, lowerBound, upperBound</code></td>
    <td>(none)</td>
    <td>
      These options must all be specified if any of them is specified. In addition, <code>numPartitions</code> must be specified. They describe how to partition the table when reading in parallel from multiple workers. <code>partitionColumn</code> must be a numeric, date, or timestamp column from the table in question. Notice that <code>lowerBound</code> and <code>upperBound</code> are just used to decide the partition stride, not for filtering the rows in the table, so all rows in the table will be partitioned and returned. This option applies only to reading.<br>
      Example:<br>
      <code>
        spark.read.format("jdbc")<br>
        .option("url", jdbcUrl)<br>
        .option("dbtable", "(select c1, c2 from t1) as subq")<br>
        .option("partitionColumn", "c1")<br>
        .option("lowerBound", "1")<br>
        .option("upperBound", "100")<br>
        .option("numPartitions", "3")<br>
        .load()
      </code>
    </td>
    <td>read</td>
  </tr>
</table>
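For connection properties like <code>user</code> and <code>password</code>, there is also the <code>DataFrameReader.jdbc</code> entry point, which takes them in a <code>java.util.Properties</code> object. A minimal sketch, in which the URL, table name, credentials, and the partition column <code>c1</code> are placeholders:

{% highlight scala %}
import java.util.Properties

// user/password travel in a Properties object instead of data source options.
val connectionProperties = new Properties()
connectionProperties.put("user", "username")
connectionProperties.put("password", "password")

val df = spark.read.jdbc(
  "jdbc:postgresql://localhost/test", "schema.tablename", connectionProperties)

// The partitioned overload mirrors the partitionColumn/lowerBound/upperBound
// options above: column "c1" is split into 3 partitions covering [1, 100].
val partitionedDF = spark.read.jdbc(
  "jdbc:postgresql://localhost/test", "schema.tablename",
  "c1", 1L, 100L, 3, connectionProperties)
{% endhighlight %}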
Note that kerberos authentication with keytab is not always supported by the JDBC driver.
Before using the <code>keytab</code> and <code>principal</code> configuration options, please make sure the following requirements are met:

* The included JDBC driver version supports kerberos authentication with keytab.
* There is a built-in connection provider which supports the used database.

There are built-in connection providers for the following databases:

* DB2
* MariaDB
* MS SQL Server
* Oracle
* PostgreSQL

If the requirements are not met, please consider using the <code>JdbcConnectionProvider</code> developer API to handle custom authentication.
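A minimal sketch of such a provider follows, assuming the <code>org.apache.spark.sql.jdbc.JdbcConnectionProvider</code> signatures; the <code>jdbc:mydb:</code> URL prefix and the authentication steps are invented placeholders:

{% highlight scala %}
import java.sql.{Connection, Driver}

import org.apache.spark.sql.jdbc.JdbcConnectionProvider

// A sketch of a custom connection provider for a hypothetical database.
class MyConnectionProvider extends JdbcConnectionProvider {
  override val name: String = "my-provider"

  // Claim only the URLs this provider knows how to authenticate.
  override def canHandle(driver: Driver, options: Map[String, String]): Boolean =
    options.get("url").exists(_.startsWith("jdbc:mydb:"))

  override def getConnection(driver: Driver, options: Map[String, String]): Connection = {
    val props = new java.util.Properties()
    // ... perform custom authentication and populate props from options ...
    driver.connect(options("url"), props)
  }

  // Return true if getConnection touches JVM-global security state
  // (e.g. a JAAS login), so Spark can guard connection creation.
  override def modifiesSecurityContext(driver: Driver, options: Map[String, String]): Boolean =
    false
}
{% endhighlight %}

The provider is then registered for Java's <code>ServiceLoader</code> by listing its class name in a <code>META-INF/services/org.apache.spark.sql.jdbc.JdbcConnectionProvider</code> file on the classpath.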
<div class="codetabs"> <div data-lang="python" markdown="1"> {% include_example jdbc_dataset python/sql/datasource.py %} </div> <div data-lang="scala" markdown="1"> {% include_example jdbc_dataset scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %} </div> <div data-lang="java" markdown="1"> {% include_example jdbc_dataset java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %} </div> <div data-lang="r" markdown="1"> {% include_example jdbc_dataset r/RSparkSQLExample.R %} </div> <div data-lang="SQL" markdown="1">{% highlight sql %}
CREATE TEMPORARY VIEW jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:postgresql:dbserver",
  dbtable "schema.tablename",
  user 'username',
  password 'password'
)

INSERT INTO TABLE jdbcTable
SELECT * FROM resultTable
{% endhighlight %}
</div>
</div>

The below table describes the data type conversions from MySQL data types to Spark SQL Data Types, when reading data from a MySQL table using the built-in jdbc data source with MySQL Connector/J as the activated JDBC Driver. Note that different JDBC drivers that can also connect to MySQL, such as MariaDB Connector/J, may have different mapping rules.
<table>
  <thead>
    <tr><th><b>MySQL Data Type</b></th><th><b>Spark SQL Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>BIT(1)</td><td>BooleanType</td><td></td></tr>
  <tr><td>BIT( >1 )</td><td>BinaryType</td><td>(Default)</td></tr>
  <tr><td>BIT( >1 )</td><td>LongType</td><td>spark.sql.legacy.mysql.bitArrayMapping.enabled=true</td></tr>
  <tr><td>TINYINT(1)</td><td>BooleanType</td><td></td></tr>
  <tr><td>TINYINT(1)</td><td>ByteType</td><td>tinyInt1isBit=false</td></tr>
  <tr><td>BOOLEAN</td><td>BooleanType</td><td></td></tr>
  <tr><td>BOOLEAN</td><td>ByteType</td><td>tinyInt1isBit=false</td></tr>
  <tr><td>TINYINT( >1 )</td><td>ByteType</td><td></td></tr>
  <tr><td>TINYINT( any ) UNSIGNED</td><td>ShortType</td><td></td></tr>
  <tr><td>SMALLINT</td><td>ShortType</td><td></td></tr>
  <tr><td>SMALLINT UNSIGNED</td><td>IntegerType</td><td></td></tr>
  <tr><td>MEDIUMINT [UNSIGNED]</td><td>IntegerType</td><td></td></tr>
  <tr><td>INT</td><td>IntegerType</td><td></td></tr>
  <tr><td>INT UNSIGNED</td><td>LongType</td><td></td></tr>
  <tr><td>BIGINT</td><td>LongType</td><td></td></tr>
  <tr><td>BIGINT UNSIGNED</td><td>DecimalType(20,0)</td><td></td></tr>
  <tr><td>FLOAT</td><td>FloatType</td><td></td></tr>
  <tr><td>FLOAT UNSIGNED</td><td>DoubleType</td><td></td></tr>
  <tr><td>DOUBLE [UNSIGNED]</td><td>DoubleType</td><td></td></tr>
  <tr><td>DECIMAL(p,s) [UNSIGNED]</td><td>DecimalType(min(38,p), min(18,s))</td><td>The column type is bounded by DecimalType(38, 18). If p>38, the fractional part is truncated, and any value whose actual precision is greater than 38 fails with the NUMERIC_VALUE_OUT_OF_RANGE.WITHOUT_SUGGESTION error</td></tr>
  <tr><td>DATE</td><td>DateType</td><td></td></tr>
  <tr><td>DATETIME</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>DATETIME</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>TIMESTAMP</td><td>TimestampType</td><td></td></tr>
  <tr><td>TIME</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>TIME</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>YEAR</td><td>DateType</td><td>yearIsDateType=true</td></tr>
  <tr><td>YEAR</td><td>IntegerType</td><td>yearIsDateType=false</td></tr>
  <tr><td>CHAR(n)</td><td>CharType(n)</td><td></td></tr>
  <tr><td>VARCHAR(n)</td><td>VarcharType(n)</td><td></td></tr>
  <tr><td>BINARY(n)</td><td>BinaryType</td><td></td></tr>
  <tr><td>VARBINARY(n)</td><td>BinaryType</td><td></td></tr>
  <tr><td>CHAR(n) BINARY</td><td>BinaryType</td><td></td></tr>
  <tr><td>VARCHAR(n) BINARY</td><td>BinaryType</td><td></td></tr>
  <tr><td>BLOB</td><td>BinaryType</td><td></td></tr>
  <tr><td>TINYBLOB</td><td>BinaryType</td><td></td></tr>
  <tr><td>MEDIUMBLOB</td><td>BinaryType</td><td></td></tr>
  <tr><td>LONGBLOB</td><td>BinaryType</td><td></td></tr>
  <tr><td>TEXT</td><td>StringType</td><td></td></tr>
  <tr><td>TINYTEXT</td><td>StringType</td><td></td></tr>
  <tr><td>MEDIUMTEXT</td><td>StringType</td><td></td></tr>
  <tr><td>LONGTEXT</td><td>StringType</td><td></td></tr>
  <tr><td>JSON</td><td>StringType</td><td></td></tr>
  <tr><td>GEOMETRY</td><td>BinaryType</td><td></td></tr>
  <tr><td>ENUM</td><td>CharType(n)</td><td></td></tr>
  <tr><td>SET</td><td>CharType(n)</td><td></td></tr>
  </tbody>
</table>

The below table describes the data type conversions from Spark SQL Data Types to MySQL data types, when creating, altering, or writing data to a MySQL table using the built-in jdbc data source with MySQL Connector/J as the activated JDBC Driver.
Note that different JDBC drivers that can also connect to MySQL, such as MariaDB Connector/J, may have different mapping rules.
<table>
  <thead>
    <tr><th><b>Spark SQL Data Type</b></th><th><b>MySQL Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>BooleanType</td><td>BIT(1)</td><td></td></tr>
  <tr><td>ByteType</td><td>TINYINT</td><td></td></tr>
  <tr><td>ShortType</td><td>SMALLINT</td><td>For Spark 3.5 and previous, it maps to INTEGER</td></tr>
  <tr><td>IntegerType</td><td>INTEGER</td><td></td></tr>
  <tr><td>LongType</td><td>BIGINT</td><td></td></tr>
  <tr><td>FloatType</td><td>FLOAT</td><td></td></tr>
  <tr><td>DoubleType</td><td>DOUBLE PRECISION</td><td></td></tr>
  <tr><td>DecimalType(p, s)</td><td>DECIMAL(p,s)</td><td></td></tr>
  <tr><td>DateType</td><td>DATE</td><td></td></tr>
  <tr><td>TimestampType</td><td>TIMESTAMP</td><td></td></tr>
  <tr><td>TimestampNTZType</td><td>DATETIME</td><td></td></tr>
  <tr><td>StringType</td><td>LONGTEXT</td><td></td></tr>
  <tr><td>BinaryType</td><td>BLOB</td><td></td></tr>
  <tr><td>CharType(n)</td><td>CHAR(n)</td><td></td></tr>
  <tr><td>VarcharType(n)</td><td>VARCHAR(n)</td><td></td></tr>
  </tbody>
</table>

Some Spark Catalyst data types are not supported with suitable MySQL types.
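One way to see how a concrete table lands in these mappings is to load it and print the inferred schema. A sketch, in which the URL, table name, and credentials are placeholders:

{% highlight scala %}
// Load a MySQL table and inspect the Spark schema it is mapped to.
val mysqlDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://dbhost:3306/test")
  .option("dbtable", "schema.tablename")
  .option("user", "username")
  .option("password", "password")
  .load()

// Per the read table above, e.g. a BIGINT UNSIGNED column prints as decimal(20,0).
mysqlDF.printSchema()
{% endhighlight %}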
The below table describes the data type conversions from PostgreSQL data types to Spark SQL Data Types, when reading data from a Postgres table using the built-in jdbc data source with the PostgreSQL JDBC Driver as the activated JDBC Driver. Note that different JDBC drivers or driver versions might produce slightly different results.
<table>
  <thead>
    <tr><th><b>PostgreSQL Data Type</b></th><th><b>Spark SQL Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>boolean</td><td>BooleanType</td><td></td></tr>
  <tr><td>smallint, smallserial</td><td>ShortType</td><td></td></tr>
  <tr><td>integer, serial</td><td>IntegerType</td><td></td></tr>
  <tr><td>bigint, bigserial</td><td>LongType</td><td></td></tr>
  <tr><td>float, float(p), real</td><td>FloatType</td><td>1 ≤ p ≤ 24</td></tr>
  <tr><td>float(p)</td><td>DoubleType</td><td>25 ≤ p ≤ 53</td></tr>
  <tr><td>double precision</td><td>DoubleType</td><td></td></tr>
  <tr><td>numeric, decimal</td><td>DecimalType</td><td><ul><li>Since PostgreSQL 15, 's' can be negative. If s&lt;0, it is adjusted to DecimalType(min(p-s, 38), 0); otherwise, DecimalType(p, s)</li><li>If p>38, the fractional part is truncated, and any value of this column whose actual precision is greater than 38 fails with the NUMERIC_VALUE_OUT_OF_RANGE.WITHOUT_SUGGESTION error</li><li>The special numeric values 'NaN', 'infinity' and '-infinity' are not supported</li></ul></td></tr>
  <tr><td>character varying(n), varchar(n)</td><td>VarcharType(n)</td><td></td></tr>
  <tr><td>character(n), char(n), bpchar(n)</td><td>CharType(n)</td><td></td></tr>
  <tr><td>bpchar</td><td>StringType</td><td></td></tr>
  <tr><td>text</td><td>StringType</td><td></td></tr>
  <tr><td>bytea</td><td>BinaryType</td><td></td></tr>
  <tr><td>date</td><td>DateType</td><td></td></tr>
  <tr><td>timestamp [ (p) ] [ without time zone ]</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>timestamp [ (p) ] [ without time zone ]</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>timestamp [ (p) ] with time zone</td><td>TimestampType</td><td></td></tr>
  <tr><td>time [ (p) ] [ without time zone ]</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>time [ (p) ] [ without time zone ]</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>time [ (p) ] with time zone</td><td>TimestampType</td><td></td></tr>
  <tr><td>interval [ fields ] [ (p) ]</td><td>StringType</td><td></td></tr>
  <tr><td>ENUM</td><td>StringType</td><td></td></tr>
  <tr><td>money</td><td>StringType</td><td>Monetary Types</td></tr>
  <tr><td>inet, cidr, macaddr, macaddr8</td><td>StringType</td><td>Network Address Types</td></tr>
  <tr><td>point, line, lseg, box, path, polygon, circle</td><td>StringType</td><td>Geometric Types</td></tr>
  <tr><td>pg_lsn</td><td>StringType</td><td>Log Sequence Number</td></tr>
  <tr><td>bit, bit(1)</td><td>BooleanType</td><td></td></tr>
  <tr><td>bit( >1 )</td><td>BinaryType</td><td></td></tr>
  <tr><td>bit varying( any )</td><td>BinaryType</td><td></td></tr>
  <tr><td>tsvector, tsquery</td><td>StringType</td><td>Text Search Types</td></tr>
  <tr><td>uuid</td><td>StringType</td><td>Universally Unique Identifier Type</td></tr>
  <tr><td>xml</td><td>StringType</td><td>XML Type</td></tr>
  <tr><td>json, jsonb</td><td>StringType</td><td>JSON Types</td></tr>
  <tr><td>array</td><td>ArrayType</td><td></td></tr>
  <tr><td>Composite Types</td><td>StringType</td><td>Types created by CREATE TYPE syntax.</td></tr>
  <tr><td>int4range, int8range, numrange, tsrange, tstzrange, daterange, etc</td><td>StringType</td><td>Range Types</td></tr>
  <tr><td>Domain Types</td><td>(Decided by the underlying type)</td><td></td></tr>
  <tr><td>oid</td><td>DecimalType(20, 0)</td><td>Object Identifier Types</td></tr>
  <tr><td>regxxx</td><td>StringType</td><td>Object Identifier Types</td></tr>
  <tr><td>void</td><td>NullType</td><td>void is a Postgres pseudo type; other pseudo types have not yet been verified</td></tr>
  </tbody>
</table>

The below table describes the data type conversions from Spark SQL Data Types to PostgreSQL data types, when creating, altering, or writing data to a PostgreSQL table using the built-in jdbc data source with the PostgreSQL JDBC Driver as the activated JDBC Driver.
<table>
  <thead>
    <tr><th><b>Spark SQL Data Type</b></th><th><b>PostgreSQL Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>BooleanType</td><td>boolean</td><td></td></tr>
  <tr><td>ByteType</td><td>smallint</td><td></td></tr>
  <tr><td>ShortType</td><td>smallint</td><td></td></tr>
  <tr><td>IntegerType</td><td>integer</td><td></td></tr>
  <tr><td>LongType</td><td>bigint</td><td></td></tr>
  <tr><td>FloatType</td><td>float4</td><td></td></tr>
  <tr><td>DoubleType</td><td>float8</td><td></td></tr>
  <tr><td>DecimalType(p, s)</td><td>numeric(p,s)</td><td></td></tr>
  <tr><td>DateType</td><td>date</td><td></td></tr>
  <tr><td>TimestampType</td><td>timestamp with time zone</td><td>Before Spark 4.0, it was mapped as timestamp. Please refer to the migration guide for more information</td></tr>
  <tr><td>TimestampNTZType</td><td>timestamp</td><td></td></tr>
  <tr><td>StringType</td><td>text</td><td></td></tr>
  <tr><td>BinaryType</td><td>bytea</td><td></td></tr>
  <tr><td>CharType(n)</td><td>CHAR(n)</td><td></td></tr>
  <tr><td>VarcharType(n)</td><td>VARCHAR(n)</td><td></td></tr>
  <tr>
    <td>ArrayType</td>
    <td>
      <table>
        <thead><tr><th><b>Element type</b></th><th><b>PG Array</b></th></tr></thead>
        <tbody>
        <tr><td>BooleanType</td><td>boolean[]</td></tr>
        <tr><td>ByteType</td><td>smallint[]</td></tr>
        <tr><td>ShortType</td><td>smallint[]</td></tr>
        <tr><td>IntegerType</td><td>integer[]</td></tr>
        <tr><td>LongType</td><td>bigint[]</td></tr>
        <tr><td>FloatType</td><td>float4[]</td></tr>
        <tr><td>DoubleType</td><td>float8[]</td></tr>
        <tr><td>DecimalType(p, s)</td><td>numeric(p,s)[]</td></tr>
        <tr><td>DateType</td><td>date[]</td></tr>
        <tr><td>TimestampType</td><td>timestamp[]</td></tr>
        <tr><td>TimestampNTZType</td><td>timestamp[]</td></tr>
        <tr><td>StringType</td><td>text[]</td></tr>
        <tr><td>BinaryType</td><td>bytea[]</td></tr>
        <tr><td>CharType(n)</td><td>char(n)[]</td></tr>
        <tr><td>VarcharType(n)</td><td>varchar(n)[]</td></tr>
        </tbody>
      </table>
    </td>
    <td>If the element type is an ArrayType, it converts to a Postgres multidimensional array. For instance, <code>ArrayType(ArrayType(StringType))</code> converts to <code>text[][]</code>, and <code>ArrayType(ArrayType(ArrayType(LongType)))</code> converts to <code>bigint[][][]</code></td>
  </tr>
  </tbody>
</table>

Some Spark Catalyst data types are not supported with suitable PostgreSQL types.
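For example, writing an ArrayType column follows the nested table above: an array&lt;string&gt; column is created as text[] on the Postgres side. A sketch, in which the URL, table name, and credentials are placeholders:

{% highlight scala %}
import spark.implicits._

// A DataFrame with an array<string> column.
val arrayDF = Seq((1, Seq("a", "b")), (2, Seq("c"))).toDF("id", "tags")

// Per the mapping above, "tags" is created as a text[] column.
arrayDF.write
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost/test")
  .option("dbtable", "array_table")
  .option("user", "username")
  .option("password", "password")
  .save()
{% endhighlight %}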
The below table describes the data type conversions from Oracle data types to Spark SQL Data Types, when reading data from an Oracle table using the built-in jdbc data source with Oracle JDBC as the activated JDBC Driver.
<table>
  <thead>
    <tr><th><b>Oracle Data Type</b></th><th><b>Spark SQL Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>BOOLEAN</td><td>BooleanType</td><td>Introduced in Oracle Release 23c</td></tr>
  <tr><td>NUMBER[(p[,s])]</td><td>DecimalType(p,s)</td><td>'s' can be negative in Oracle. If s&lt;0, it is adjusted to DecimalType(min(p-s, 38), 0); otherwise, DecimalType(p, s). If p>38, the fractional part is truncated, and any value of this column whose actual precision is greater than 38 fails with the NUMERIC_VALUE_OUT_OF_RANGE.WITHOUT_SUGGESTION error</td></tr>
  <tr><td>FLOAT[(p)]</td><td>DecimalType(38, 10)</td><td></td></tr>
  <tr><td>BINARY_FLOAT</td><td>FloatType</td><td></td></tr>
  <tr><td>BINARY_DOUBLE</td><td>DoubleType</td><td></td></tr>
  <tr><td>LONG</td><td>BinaryType</td><td></td></tr>
  <tr><td>RAW(size)</td><td>BinaryType</td><td></td></tr>
  <tr><td>LONG RAW</td><td>BinaryType</td><td></td></tr>
  <tr><td>DATE</td><td>TimestampType</td><td>When oracle.jdbc.mapDateToTimestamp=true, it follows TIMESTAMP's behavior below</td></tr>
  <tr><td>DATE</td><td>DateType</td><td>When oracle.jdbc.mapDateToTimestamp=false, it maps to DateType</td></tr>
  <tr><td>TIMESTAMP</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>TIMESTAMP</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>TIMESTAMP WITH TIME ZONE</td><td>TimestampType</td><td></td></tr>
  <tr><td>TIMESTAMP WITH LOCAL TIME ZONE</td><td>TimestampType</td><td></td></tr>
  <tr><td>INTERVAL YEAR TO MONTH</td><td>YearMonthIntervalType</td><td></td></tr>
  <tr><td>INTERVAL DAY TO SECOND</td><td>DayTimeIntervalType</td><td></td></tr>
  <tr><td>CHAR[(size [BYTE | CHAR])]</td><td>CharType(size)</td><td></td></tr>
  <tr><td>NCHAR[(size)]</td><td>StringType</td><td></td></tr>
  <tr><td>VARCHAR2(size [BYTE | CHAR])</td><td>VarcharType(size)</td><td></td></tr>
  <tr><td>NVARCHAR2</td><td>StringType</td><td></td></tr>
  <tr><td>ROWID/UROWID</td><td>StringType</td><td></td></tr>
  <tr><td>CLOB</td><td>StringType</td><td></td></tr>
  <tr><td>NCLOB</td><td>StringType</td><td></td></tr>
  <tr><td>BLOB</td><td>BinaryType</td><td></td></tr>
  <tr><td>BFILE</td><td></td><td>UNRECOGNIZED_SQL_TYPE error raised</td></tr>
  </tbody>
</table>

The below table describes the data type conversions from Spark SQL Data Types to Oracle data types, when creating, altering, or writing data to an Oracle table using the built-in jdbc data source with Oracle JDBC as the activated JDBC Driver.
<table>
  <thead>
    <tr><th><b>Spark SQL Data Type</b></th><th><b>Oracle Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>BooleanType</td><td>NUMBER(1, 0)</td><td>BooleanType maps to NUMBER(1, 0), as BOOLEAN was only introduced in Oracle Release 23c</td></tr>
  <tr><td>ByteType</td><td>NUMBER(3)</td><td></td></tr>
  <tr><td>ShortType</td><td>NUMBER(5)</td><td></td></tr>
  <tr><td>IntegerType</td><td>NUMBER(10)</td><td></td></tr>
  <tr><td>LongType</td><td>NUMBER(19)</td><td></td></tr>
  <tr><td>FloatType</td><td>NUMBER(19, 4)</td><td></td></tr>
  <tr><td>DoubleType</td><td>NUMBER(19, 4)</td><td></td></tr>
  <tr><td>DecimalType(p, s)</td><td>NUMBER(p,s)</td><td></td></tr>
  <tr><td>DateType</td><td>DATE</td><td></td></tr>
  <tr><td>TimestampType</td><td>TIMESTAMP WITH LOCAL TIME ZONE</td><td></td></tr>
  <tr><td>TimestampNTZType</td><td>TIMESTAMP</td><td></td></tr>
  <tr><td>StringType</td><td>VARCHAR2(255)</td><td>For historical reasons, a string value can have at most 255 characters</td></tr>
  <tr><td>BinaryType</td><td>BLOB</td><td></td></tr>
  <tr><td>CharType(n)</td><td>CHAR(n)</td><td></td></tr>
  <tr><td>VarcharType(n)</td><td>VARCHAR2(n)</td><td></td></tr>
  </tbody>
</table>

Some Spark Catalyst data types are not supported with suitable Oracle types.
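Driver-specific connection properties such as oracle.jdbc.mapDateToTimestamp, referenced in the read table above, can be specified in the data source options like any other connection property. A sketch under that assumption; the URL, table name, and credentials are placeholders:

{% highlight scala %}
// With oracle.jdbc.mapDateToTimestamp=false, Oracle DATE columns arrive
// as DateType instead of TimestampType (see the read table above).
val oracleDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/service")
  .option("dbtable", "schema.tablename")
  .option("user", "username")
  .option("password", "password")
  .option("oracle.jdbc.mapDateToTimestamp", "false")
  .load()
{% endhighlight %}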
The below table describes the data type conversions from Microsoft SQL Server data types to Spark SQL Data Types, when reading data from a Microsoft SQL Server table using the built-in jdbc data source with mssql-jdbc as the activated JDBC Driver.
<table>
  <thead>
    <tr><th><b>SQL Server Data Type</b></th><th><b>Spark SQL Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>bit</td><td>BooleanType</td><td></td></tr>
  <tr><td>tinyint</td><td>ShortType</td><td></td></tr>
  <tr><td>smallint</td><td>ShortType</td><td></td></tr>
  <tr><td>int</td><td>IntegerType</td><td></td></tr>
  <tr><td>bigint</td><td>LongType</td><td></td></tr>
  <tr><td>float(p), real</td><td>FloatType</td><td>1 ≤ p ≤ 24</td></tr>
  <tr><td>float[(p)]</td><td>DoubleType</td><td>25 ≤ p ≤ 53</td></tr>
  <tr><td>double precision</td><td>DoubleType</td><td></td></tr>
  <tr><td>smallmoney</td><td>DecimalType(10, 4)</td><td></td></tr>
  <tr><td>money</td><td>DecimalType(19, 4)</td><td></td></tr>
  <tr><td>decimal[(p[, s])], numeric[(p[, s])]</td><td>DecimalType(p, s)</td><td></td></tr>
  <tr><td>date</td><td>DateType</td><td></td></tr>
  <tr><td>datetime</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>datetime</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>datetime2 [ (fractional seconds precision) ]</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>datetime2 [ (fractional seconds precision) ]</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>datetimeoffset [ (fractional seconds precision) ]</td><td>TimestampType</td><td></td></tr>
  <tr><td>smalldatetime</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>smalldatetime</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>time [ (fractional second scale) ]</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>time [ (fractional second scale) ]</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>binary [ ( n ) ]</td><td>BinaryType</td><td></td></tr>
  <tr><td>varbinary [ ( n | max ) ]</td><td>BinaryType</td><td></td></tr>
  <tr><td>char [ ( n ) ]</td><td>CharType(n)</td><td></td></tr>
  <tr><td>varchar [ ( n | max ) ]</td><td>VarcharType(n)</td><td></td></tr>
  <tr><td>nchar [ ( n ) ]</td><td>StringType</td><td></td></tr>
  <tr><td>nvarchar [ ( n | max ) ]</td><td>StringType</td><td></td></tr>
  <tr><td>text</td><td>StringType</td><td></td></tr>
  <tr><td>ntext</td><td>StringType</td><td></td></tr>
  <tr><td>image</td><td>StringType</td><td></td></tr>
  <tr><td>geography</td><td>BinaryType</td><td></td></tr>
  <tr><td>geometry</td><td>BinaryType</td><td></td></tr>
  <tr><td>rowversion</td><td>BinaryType</td><td></td></tr>
  <tr><td>sql_variant</td><td></td><td>UNRECOGNIZED_SQL_TYPE error raised</td></tr>
  </tbody>
</table>

The below table describes the data type conversions from Spark SQL Data Types to Microsoft SQL Server data types, when creating, altering, or writing data to a Microsoft SQL Server table using the built-in jdbc data source with mssql-jdbc as the activated JDBC Driver.
<table>
  <thead>
    <tr><th><b>Spark SQL Data Type</b></th><th><b>SQL Server Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>BooleanType</td><td>bit</td><td></td></tr>
  <tr><td>ByteType</td><td>smallint</td><td>Supported since Spark 4.0.0, previous versions throw errors</td></tr>
  <tr><td>ShortType</td><td>smallint</td><td></td></tr>
  <tr><td>IntegerType</td><td>int</td><td></td></tr>
  <tr><td>LongType</td><td>bigint</td><td></td></tr>
  <tr><td>FloatType</td><td>real</td><td></td></tr>
  <tr><td>DoubleType</td><td>double precision</td><td></td></tr>
  <tr><td>DecimalType(p, s)</td><td>number(p,s)</td><td></td></tr>
  <tr><td>DateType</td><td>date</td><td></td></tr>
  <tr><td>TimestampType</td><td>datetime</td><td></td></tr>
  <tr><td>TimestampNTZType</td><td>datetime</td><td></td></tr>
  <tr><td>StringType</td><td>nvarchar(max)</td><td></td></tr>
  <tr><td>BinaryType</td><td>varbinary(max)</td><td></td></tr>
  <tr><td>CharType(n)</td><td>char(n)</td><td></td></tr>
  <tr><td>VarcharType(n)</td><td>varchar(n)</td><td></td></tr>
  </tbody>
</table>

Some Spark Catalyst data types are not supported with suitable SQL Server types.
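The preferTimestampNTZ option referenced in several rows above is an ordinary data source option. A sketch of how it changes the read-side mapping; the URL and table name are placeholders:

{% highlight scala %}
// With preferTimestampNTZ=true, datetime/datetime2/smalldatetime columns
// map to TimestampNTZType instead of TimestampType (see the table above).
val ntzDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:sqlserver://dbhost;databaseName=test")
  .option("dbtable", "schema.tablename")
  .option("preferTimestampNTZ", "true")
  .load()
{% endhighlight %}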
The below table describes the data type conversions from DB2 data types to Spark SQL Data Types, when reading data from a DB2 table using the built-in jdbc data source with the IBM Data Server Driver For JDBC and SQLJ as the activated JDBC Driver.
<table>
  <thead>
    <tr><th><b>DB2 Data Type</b></th><th><b>Spark SQL Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>BOOLEAN</td><td>BinaryType</td><td></td></tr>
  <tr><td>SMALLINT</td><td>ShortType</td><td></td></tr>
  <tr><td>INTEGER</td><td>IntegerType</td><td></td></tr>
  <tr><td>BIGINT</td><td>LongType</td><td></td></tr>
  <tr><td>REAL</td><td>FloatType</td><td></td></tr>
  <tr><td>DOUBLE, FLOAT</td><td>DoubleType</td><td>FLOAT is double precision floating-point in DB2</td></tr>
  <tr><td>DECIMAL, NUMERIC, DECFLOAT</td><td>DecimalType</td><td></td></tr>
  <tr><td>DATE</td><td>DateType</td><td></td></tr>
  <tr><td>TIMESTAMP, TIMESTAMP WITHOUT TIME ZONE</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>TIMESTAMP, TIMESTAMP WITHOUT TIME ZONE</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>TIMESTAMP WITH TIME ZONE</td><td>TimestampType</td><td></td></tr>
  <tr><td>TIME</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>TIME</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>CHAR(n)</td><td>CharType(n)</td><td></td></tr>
  <tr><td>VARCHAR(n)</td><td>VarcharType(n)</td><td></td></tr>
  <tr><td>CHAR(n) FOR BIT DATA</td><td>BinaryType</td><td></td></tr>
  <tr><td>VARCHAR(n) FOR BIT DATA</td><td>BinaryType</td><td></td></tr>
  <tr><td>BINARY(n)</td><td>BinaryType</td><td></td></tr>
  <tr><td>VARBINARY(n)</td><td>BinaryType</td><td></td></tr>
  <tr><td>CLOB(n)</td><td>StringType</td><td></td></tr>
  <tr><td>DBCLOB(n)</td><td>StringType</td><td></td></tr>
  <tr><td>BLOB(n)</td><td>BinaryType</td><td></td></tr>
  <tr><td>GRAPHIC(n)</td><td>StringType</td><td></td></tr>
  <tr><td>VARGRAPHIC(n)</td><td>StringType</td><td></td></tr>
  <tr><td>XML</td><td>StringType</td><td></td></tr>
  <tr><td>ROWID</td><td>StringType</td><td></td></tr>
  </tbody>
</table>

The below table describes the data type conversions from Spark SQL Data Types to DB2 data types, when creating, altering, or writing data to a DB2 table using the built-in jdbc data source with the IBM Data Server Driver For JDBC and SQLJ as the activated JDBC Driver.
<table>
  <thead>
    <tr><th><b>Spark SQL Data Type</b></th><th><b>DB2 Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>BooleanType</td><td>BOOLEAN</td><td></td></tr>
  <tr><td>ByteType</td><td>SMALLINT</td><td></td></tr>
  <tr><td>ShortType</td><td>SMALLINT</td><td></td></tr>
  <tr><td>IntegerType</td><td>INTEGER</td><td></td></tr>
  <tr><td>LongType</td><td>BIGINT</td><td></td></tr>
  <tr><td>FloatType</td><td>REAL</td><td></td></tr>
  <tr><td>DoubleType</td><td>DOUBLE PRECISION</td><td></td></tr>
  <tr><td>DecimalType(p, s)</td><td>DECIMAL(p,s)</td><td>The maximum value for 'p' is 31 in DB2, while it is 38 in Spark. It might fail when storing DecimalType(p>=32, s) to DB2</td></tr>
  <tr><td>DateType</td><td>DATE</td><td></td></tr>
  <tr><td>TimestampType</td><td>TIMESTAMP</td><td></td></tr>
  <tr><td>TimestampNTZType</td><td>TIMESTAMP</td><td></td></tr>
  <tr><td>StringType</td><td>CLOB</td><td></td></tr>
  <tr><td>BinaryType</td><td>BLOB</td><td></td></tr>
  <tr><td>CharType(n)</td><td>CHAR(n)</td><td>The maximum value for 'n' is 255 in DB2, while it is unlimited in Spark.</td></tr>
  <tr><td>VarcharType(n)</td><td>VARCHAR(n)</td><td>The maximum value for 'n' is 255 in DB2, while it is unlimited in Spark.</td></tr>
  </tbody>
</table>

Some Spark Catalyst data types are not supported with suitable DB2 types.
The below table describes the data type conversions from Teradata data types to Spark SQL Data Types, when reading data from a Teradata table using the built-in jdbc data source with the Teradata JDBC Driver as the activated JDBC Driver.
<table>
  <thead>
    <tr><th><b>Teradata Data Type</b></th><th><b>Spark SQL Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>BYTEINT</td><td>ByteType</td><td></td></tr>
  <tr><td>SMALLINT</td><td>ShortType</td><td></td></tr>
  <tr><td>INTEGER, INT</td><td>IntegerType</td><td></td></tr>
  <tr><td>BIGINT</td><td>LongType</td><td></td></tr>
  <tr><td>REAL, DOUBLE PRECISION, FLOAT</td><td>DoubleType</td><td></td></tr>
  <tr><td>DECIMAL, NUMERIC, NUMBER</td><td>DecimalType</td><td></td></tr>
  <tr><td>DATE</td><td>DateType</td><td></td></tr>
  <tr><td>TIMESTAMP, TIMESTAMP WITH TIME ZONE</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>TIMESTAMP, TIMESTAMP WITH TIME ZONE</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>TIME, TIME WITH TIME ZONE</td><td>TimestampType</td><td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td></tr>
  <tr><td>TIME, TIME WITH TIME ZONE</td><td>TimestampNTZType</td><td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td></tr>
  <tr><td>CHARACTER(n), CHAR(n), GRAPHIC(n)</td><td>CharType(n)</td><td></td></tr>
  <tr><td>VARCHAR(n), VARGRAPHIC(n)</td><td>VarcharType(n)</td><td></td></tr>
  <tr><td>BYTE(n), VARBYTE(n)</td><td>BinaryType</td><td></td></tr>
  <tr><td>CLOB</td><td>StringType</td><td></td></tr>
  <tr><td>BLOB</td><td>BinaryType</td><td></td></tr>
  <tr><td>INTERVAL Data Types</td><td>-</td><td>Mappings for the INTERVAL data types are not yet defined</td></tr>
  <tr><td>Period Data Types, ARRAY, UDT</td><td>-</td><td>Not Supported</td></tr>
  </tbody>
</table>

The below table describes the data type conversions from Spark SQL Data Types to Teradata data types, when creating, altering, or writing data to a Teradata table using the built-in jdbc data source with the Teradata JDBC Driver as the activated JDBC Driver.
<table>
  <thead>
    <tr><th><b>Spark SQL Data Type</b></th><th><b>Teradata Data Type</b></th><th><b>Remarks</b></th></tr>
  </thead>
  <tbody>
  <tr><td>BooleanType</td><td>CHAR(1)</td><td></td></tr>
  <tr><td>ByteType</td><td>BYTEINT</td><td></td></tr>
  <tr><td>ShortType</td><td>SMALLINT</td><td></td></tr>
  <tr><td>IntegerType</td><td>INTEGER</td><td></td></tr>
  <tr><td>LongType</td><td>BIGINT</td><td></td></tr>
  <tr><td>FloatType</td><td>REAL</td><td></td></tr>
  <tr><td>DoubleType</td><td>DOUBLE PRECISION</td><td></td></tr>
  <tr><td>DecimalType(p, s)</td><td>DECIMAL(p,s)</td><td></td></tr>
  <tr><td>DateType</td><td>DATE</td><td></td></tr>
  <tr><td>TimestampType</td><td>TIMESTAMP</td><td></td></tr>
  <tr><td>TimestampNTZType</td><td>TIMESTAMP</td><td></td></tr>
  <tr><td>StringType</td><td>VARCHAR(255)</td><td></td></tr>
  <tr><td>BinaryType</td><td>BLOB</td><td></td></tr>
  <tr><td>CharType(n)</td><td>CHAR(n)</td><td></td></tr>
  <tr><td>VarcharType(n)</td><td>VARCHAR(n)</td><td></td></tr>
  </tbody>
</table>

Some Spark Catalyst data types are not supported with suitable Teradata types.