docs/sql-data-sources-parquet.md
Parquet is a columnar format that is supported by many other data processing systems. Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema of the original data. When reading Parquet files, all columns are automatically converted to be nullable for compatibility reasons.
Using the data from the above example:
<div class="codetabs"> <div data-lang="python" markdown="1">{% include_example basic_parquet_example python/sql/datasource.py %}
</div> <div data-lang="scala" markdown="1"> {% include_example basic_parquet_example scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %} </div> <div data-lang="java" markdown="1"> {% include_example basic_parquet_example java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %} </div> <div data-lang="r" markdown="1">{% include_example basic_parquet_example r/RSparkSQLExample.R %}
</div> <div data-lang="SQL" markdown="1">{% highlight sql %}
CREATE TEMPORARY VIEW parquetTable
USING org.apache.spark.sql.parquet
OPTIONS (
  path "examples/src/main/resources/people.parquet"
)
SELECT * FROM parquetTable
{% endhighlight %}
</div> </div>

Table partitioning is a common optimization approach used in systems like Hive. In a partitioned
table, data are usually stored in different directories, with partitioning column values encoded in
the path of each partition directory. All built-in file sources (including Text/CSV/JSON/ORC/Parquet)
are able to discover and infer partitioning information automatically.
For example, we can store all our previously used
population data into a partitioned table using the following directory structure, with two extra
columns, gender and country as partitioning columns:
{% highlight text %}
path
└── to
    └── table
        ├── gender=male
        │   ├── ...
        │   │
        │   ├── country=US
        │   │   └── data.parquet
        │   ├── country=CN
        │   │   └── data.parquet
        │   └── ...
        └── gender=female
            ├── ...
            │
            ├── country=US
            │   └── data.parquet
            ├── country=CN
            │   └── data.parquet
            └── ...
{% endhighlight %}
By passing path/to/table to either SparkSession.read.parquet or SparkSession.read.load, Spark SQL
will automatically extract the partitioning information from the paths.
Now the schema of the returned DataFrame becomes:
{% highlight text %}
root
|-- name: string (nullable = true)
|-- age: long (nullable = true)
|-- gender: string (nullable = true)
|-- country: string (nullable = true)
{% endhighlight %}
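For example, a read along these lines (a minimal Scala sketch over the example layout above; the path is illustrative) discovers the partition columns automatically:

{% highlight scala %}
// Partition columns gender and country are inferred from the directory names
val peopleDF = spark.read.parquet("path/to/table")
peopleDF.printSchema()
{% endhighlight %}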
Notice that the data types of the partitioning columns are automatically inferred. Currently,
numeric data types, date, timestamp and string type are supported. Sometimes users may not want
to automatically infer the data types of the partitioning columns. For these use cases, the
automatic type inference can be configured by
spark.sql.sources.partitionColumnTypeInference.enabled, which defaults to true. When type
inference is disabled, string type will be used for the partitioning columns.
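For example, the inference can be switched off before the read so that all partition columns come back as strings (a minimal Scala sketch; the path is illustrative):

{% highlight scala %}
// Partition values such as gender=male will now be read as plain strings
spark.conf.set("spark.sql.sources.partitionColumnTypeInference.enabled", "false")
val df = spark.read.parquet("path/to/table")
{% endhighlight %}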
Starting from Spark 1.6.0, partition discovery only finds partitions under the given paths
by default. For the above example, if users pass path/to/table/gender=male to either
SparkSession.read.parquet or SparkSession.read.load, gender will not be considered as a
partitioning column. If users need to specify the base path that partition discovery
should start with, they can set basePath in the data source options. For example,
when path/to/table/gender=male is the path of the data and
users set basePath to path/to/table/, gender will be a partitioning column.
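For example (a minimal Scala sketch over the example layout above):

{% highlight scala %}
// Reading path/to/table/gender=male directly would not treat gender as a partitioning column;
// setting basePath to the table root keeps it as one
val maleDF = spark.read
  .option("basePath", "path/to/table/")
  .parquet("path/to/table/gender=male")
{% endhighlight %}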
Like Protocol Buffer, Avro, and Thrift, Parquet also supports schema evolution. Users can start with a simple schema, and gradually add more columns to the schema as needed. In this way, users may end up with multiple Parquet files with different but mutually compatible schemas. The Parquet data source is now able to automatically detect this case and merge schemas of all these files.
Since schema merging is a relatively expensive operation, and is not a necessity in most cases, we
turned it off by default starting from 1.5.0. You may enable it by

1. setting the data source option `mergeSchema` to `true` when reading Parquet files (as shown in the
   examples below), or
2. setting the global SQL option `spark.sql.parquet.mergeSchema` to `true`.

<div class="codetabs"> <div data-lang="python" markdown="1">{% include_example schema_merging python/sql/datasource.py %}
</div> <div data-lang="scala" markdown="1"> {% include_example schema_merging scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %} </div> <div data-lang="java" markdown="1"> {% include_example schema_merging java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %} </div> <div data-lang="r" markdown="1">{% include_example schema_merging r/RSparkSQLExample.R %}
</div> </div>

When reading from Hive metastore Parquet tables and writing to non-partitioned Hive metastore
Parquet tables, Spark SQL will try to use its own Parquet support instead of Hive SerDe for
better performance. This behavior is controlled by the spark.sql.hive.convertMetastoreParquet
configuration, and is turned on by default.
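For example, to fall back to the Hive SerDe for such tables instead (a minimal Scala sketch):

{% highlight scala %}
// Disable the built-in Parquet conversion for Hive metastore Parquet tables
spark.conf.set("spark.sql.hive.convertMetastoreParquet", "false")
{% endhighlight %}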
There are two key differences between Hive and Parquet from the perspective of table schema
processing:

1. Hive is case insensitive, while Parquet is not.
2. Hive considers all columns nullable, while nullability in Parquet is significant.

Due to this reason, we must reconcile the Hive metastore schema with the Parquet schema when
converting a Hive metastore Parquet table to a Spark SQL Parquet table. The reconciliation rules
are:
1. Fields that have the same name in both schemas must have the same data type regardless of
   nullability. The reconciled field should have the data type of the Parquet side, so that
   nullability is respected.
2. The reconciled schema contains exactly those fields defined in the Hive metastore schema: any
   fields that only appear in the Parquet schema are dropped, and any fields that only appear in
   the Hive metastore schema are added as nullable fields.
Spark SQL caches Parquet metadata for better performance. When Hive metastore Parquet table conversion is enabled, metadata of those converted tables are also cached. If these tables are updated by Hive or other external tools, you need to refresh them manually to ensure consistent metadata.
<div class="codetabs"> <div data-lang="python" markdown="1">{% highlight python %}
spark.catalog.refreshTable("my_table") {% endhighlight %}
</div> <div data-lang="scala" markdown="1">{% highlight scala %} // spark is an existing SparkSession spark.catalog.refreshTable("my_table") {% endhighlight %}
</div> <div data-lang="java" markdown="1">{% highlight java %} // spark is an existing SparkSession spark.catalog().refreshTable("my_table"); {% endhighlight %}
</div> <div data-lang="r" markdown="1">{% highlight r %} refreshTable("my_table") {% endhighlight %}
</div> <div data-lang="SQL" markdown="1">{% highlight sql %} REFRESH TABLE my_table; {% endhighlight %}
</div> </div>

Since Spark 3.2, columnar encryption is supported for Parquet tables with Apache Parquet 1.12+.
Parquet uses the envelope encryption practice, where file parts are encrypted with "data encryption keys" (DEKs), and the DEKs are encrypted with "master encryption keys" (MEKs). The DEKs are randomly generated by Parquet for each encrypted file/column. The MEKs are generated, stored and managed in a Key Management Service (KMS) of the user's choice. The Parquet Maven repository has a jar with a mock KMS implementation that makes it possible to run column encryption and decryption from a spark-shell only, without deploying a KMS server (download the parquet-hadoop-tests.jar file and place it in the Spark jars folder):
<div class="codetabs"> <div data-lang="python" markdown="1">{% highlight python %}
# The KMS client class, key list and crypto factory class shown in the Scala/Java tabs can be
# passed as Hadoop properties, e.g. via --conf spark.hadoop.<property>=<value> on submit.

# Write encrypted dataframe files.
# Column "square" will be protected with master key "keyA".
# Parquet file footers will be protected with master key "keyB"
squaresDF.write \
    .option("parquet.encryption.column.keys", "keyA:square") \
    .option("parquet.encryption.footer.key", "keyB") \
    .parquet("/path/to/table.parquet.encrypted")

# Read encrypted dataframe files
df2 = spark.read.parquet("/path/to/table.parquet.encrypted")
{% endhighlight %}
</div> <div data-lang="scala" markdown="1"> {% highlight scala %}sc.hadoopConfiguration.set("parquet.encryption.kms.client.class" , "org.apache.parquet.crypto.keytools.mocks.InMemoryKMS")
// Explicit master keys (base64 encoded) - required only for mock InMemoryKMS sc.hadoopConfiguration.set("parquet.encryption.key.list" , "keyA:AAECAwQFBgcICQoLDA0ODw== , keyB:AAECAAECAAECAAECAAECAA==")
// Activate Parquet encryption, driven by Hadoop properties sc.hadoopConfiguration.set("parquet.crypto.factory.class" , "org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory")
// Write encrypted dataframe files. // Column "square" will be protected with master key "keyA". // Parquet file footers will be protected with master key "keyB" squaresDF.write. option("parquet.encryption.column.keys" , "keyA:square"). option("parquet.encryption.footer.key" , "keyB"). parquet("/path/to/table.parquet.encrypted")
// Read encrypted dataframe files val df2 = spark.read.parquet("/path/to/table.parquet.encrypted")
{% endhighlight %}
</div> <div data-lang="java" markdown="1"> {% highlight java %}sc.hadoopConfiguration().set("parquet.encryption.kms.client.class" , "org.apache.parquet.crypto.keytools.mocks.InMemoryKMS");
// Explicit master keys (base64 encoded) - required only for mock InMemoryKMS sc.hadoopConfiguration().set("parquet.encryption.key.list" , "keyA:AAECAwQFBgcICQoLDA0ODw== , keyB:AAECAAECAAECAAECAAECAA==");
// Activate Parquet encryption, driven by Hadoop properties sc.hadoopConfiguration().set("parquet.crypto.factory.class" , "org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory");
// Write encrypted dataframe files. // Column "square" will be protected with master key "keyA". // Parquet file footers will be protected with master key "keyB" squaresDF.write(). option("parquet.encryption.column.keys" , "keyA:square"). option("parquet.encryption.footer.key" , "keyB"). parquet("/path/to/table.parquet.encrypted");
// Read encrypted dataframe files Dataset<Row> df2 = spark.read().parquet("/path/to/table.parquet.encrypted");
{% endhighlight %}
</div> </div>

The InMemoryKMS class is provided only for illustration and simple demonstration of Parquet
encryption functionality. It should not be used in a real deployment. The master encryption keys
must be kept and managed in a production-grade KMS system, deployed in the user's organization.
Rollout of Spark with Parquet encryption requires implementation of a client class for the KMS
server. Parquet provides a plug-in interface for development of such classes, for example:
<div data-lang="java" markdown="1"> {% highlight java %}public interface KmsClient { // Wraps a key - encrypts it with the master key. public String wrapKey(byte[] keyBytes, String masterKeyIdentifier);
// Decrypts (unwraps) a key with the master key. public byte[] unwrapKey(String wrappedKey, String masterKeyIdentifier);
// Use of initialization parameters is optional. public void initialize(Configuration configuration, String kmsInstanceID, String kmsInstanceURL, String accessToken); }
{% endhighlight %}
</div>

An example of such a class for an open source KMS can be found in the parquet-java repository. The
production KMS client should be designed in cooperation with the organization's security
administrators, and built by developers with experience in access control management. Once such a
class is created, it can be passed to applications via the parquet.encryption.kms.client.class
parameter and leveraged by general Spark users as shown in the encrypted dataframe write/read
sample above.
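As a purely illustrative sketch (Scala; the `MyKmsClient` name and the `wrapWithRemoteKms`/`unwrapWithRemoteKms` helpers are hypothetical placeholders for whatever API the chosen KMS actually exposes), such a plug-in has roughly this shape:

{% highlight scala %}
import org.apache.hadoop.conf.Configuration
import org.apache.parquet.crypto.keytools.KmsClient

// Hypothetical sketch of a KMS client plug-in; not a real or secure implementation.
class MyKmsClient extends KmsClient {
  private var kmsUrl: String = _
  private var token: String = _

  override def initialize(configuration: Configuration, kmsInstanceID: String,
                          kmsInstanceURL: String, accessToken: String): Unit = {
    kmsUrl = kmsInstanceURL
    token = accessToken
  }

  // Encrypt (wrap) a key with the master key held in the external KMS
  override def wrapKey(keyBytes: Array[Byte], masterKeyIdentifier: String): String =
    wrapWithRemoteKms(kmsUrl, token, masterKeyIdentifier, keyBytes)

  // Decrypt (unwrap) a previously wrapped key
  override def unwrapKey(wrappedKey: String, masterKeyIdentifier: String): Array[Byte] =
    unwrapWithRemoteKms(kmsUrl, token, masterKeyIdentifier, wrappedKey)

  // Placeholders standing in for the organization's actual KMS calls
  private def wrapWithRemoteKms(url: String, token: String, keyId: String,
                                key: Array[Byte]): String = ???
  private def unwrapWithRemoteKms(url: String, token: String, keyId: String,
                                  wrapped: String): Array[Byte] = ???
}
{% endhighlight %}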
Note: By default, Parquet implements a "double envelope encryption" mode that minimizes the interaction of Spark executors with a KMS server. In this mode, the DEKs are encrypted with "key encryption keys" (KEKs, randomly generated by Parquet). The KEKs are encrypted with MEKs in the KMS; the result and the KEK itself are cached in Spark executor memory. Users interested in regular envelope encryption can switch to it by setting the parquet.encryption.double.wrapping parameter to false. For more details on Parquet encryption parameters, visit the parquet-hadoop configuration page.
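For example, using the same Hadoop-configuration style as the samples above (a minimal sketch):

{% highlight scala %}
// Switch from double envelope encryption back to regular envelope encryption
sc.hadoopConfiguration.set("parquet.encryption.double.wrapping", "false")
{% endhighlight %}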
Data source options of Parquet can be set via:

* the `.option`/`.options` methods of
  * `DataFrameReader`
  * `DataFrameWriter`
  * `DataStreamReader`
  * `DataStreamWriter`
* the `OPTIONS` clause at `CREATE TABLE USING DATA_SOURCE`

`datetimeRebaseMode` (read scope) controls how ancient dates/timestamps that are ambiguous between the Julian and Proleptic Gregorian calendars are loaded. Currently supported modes are:
<ul>
<li><code>EXCEPTION</code>: fails in reads of ancient dates/timestamps that are ambiguous between the two calendars.</li>
<li><code>CORRECTED</code>: loads dates/timestamps without rebasing.</li>
<li><code>LEGACY</code>: performs rebasing of ancient dates/timestamps from the Julian to Proleptic Gregorian calendar.</li>
</ul>
`int96RebaseMode` (read scope) controls how ancient INT96 timestamps that are ambiguous between the two calendars are loaded. Currently supported modes are:
<ul>
<li><code>EXCEPTION</code>: fails in reads of ancient INT96 timestamps that are ambiguous between the two calendars.</li>
<li><code>CORRECTED</code>: loads INT96 timestamps without rebasing.</li>
<li><code>LEGACY</code>: performs rebasing of ancient timestamps from the Julian to Proleptic Gregorian calendar.</li>
</ul>
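For instance, a per-read override might look like the following sketch (Scala; the option names correspond to the lists above, and the path is illustrative):

{% highlight scala %}
// Force the corrected (no-rebase) behaviour for a single read
val df = spark.read
  .option("datetimeRebaseMode", "CORRECTED")
  .option("int96RebaseMode", "CORRECTED")
  .parquet("path/to/table")
{% endhighlight %}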
Configuration of Parquet can be done via spark.conf.set or by running
SET key=value commands using SQL.
`spark.sql.parquet.datetimeRebaseModeInRead` controls the rebasing of ancient dates/timestamps when reading Parquet files:
<ul>
  <li><code>EXCEPTION</code>: Spark will fail the reading if it sees ancient dates/timestamps that are ambiguous between the two calendars.</li>
  <li><code>CORRECTED</code>: Spark will not do rebase and will read the dates/timestamps as is.</li>
  <li><code>LEGACY</code>: Spark will rebase dates/timestamps from the legacy hybrid (Julian + Gregorian) calendar to the Proleptic Gregorian calendar when reading Parquet files.</li>
</ul>
This config is only effective if the writer info (like Spark, Hive) of the Parquet files is unknown.
`spark.sql.parquet.datetimeRebaseModeInWrite` controls the rebasing of ancient dates/timestamps when writing Parquet files:
<ul>
  <li><code>EXCEPTION</code>: Spark will fail the writing if it sees ancient dates/timestamps that are ambiguous between the two calendars.</li>
  <li><code>CORRECTED</code>: Spark will not do rebase and will write the dates/timestamps as is.</li>
  <li><code>LEGACY</code>: Spark will rebase dates/timestamps from the Proleptic Gregorian calendar to the legacy hybrid (Julian + Gregorian) calendar when writing Parquet files.</li>
</ul>
`spark.sql.parquet.int96RebaseModeInRead` controls the rebasing of ancient INT96 timestamps when reading Parquet files:
<ul>
  <li><code>EXCEPTION</code>: Spark will fail the reading if it sees ancient INT96 timestamps that are ambiguous between the two calendars.</li>
  <li><code>CORRECTED</code>: Spark will not do rebase and will read the INT96 timestamps as is.</li>
  <li><code>LEGACY</code>: Spark will rebase INT96 timestamps from the legacy hybrid (Julian + Gregorian) calendar to the Proleptic Gregorian calendar when reading Parquet files.</li>
</ul>
This config is only effective if the writer info (like Spark, Hive) of the Parquet files is unknown.
`spark.sql.parquet.int96RebaseModeInWrite` controls the rebasing of ancient INT96 timestamps when writing Parquet files:
<ul>
  <li><code>EXCEPTION</code>: Spark will fail the writing if it sees ancient INT96 timestamps that are ambiguous between the two calendars.</li>
  <li><code>CORRECTED</code>: Spark will not do rebase and will write the INT96 timestamps as is.</li>
  <li><code>LEGACY</code>: Spark will rebase INT96 timestamps from the Proleptic Gregorian calendar to the legacy hybrid (Julian + Gregorian) calendar when writing Parquet files.</li>
</ul>
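For example, the same behaviour can be chosen session-wide (a minimal sketch; the keys mirror the lists above and can equally be set with `SET key=value` in SQL):

{% highlight scala %}
// Session-level rebase settings, equivalent to SET commands in SQL
spark.conf.set("spark.sql.parquet.datetimeRebaseModeInRead", "CORRECTED")
spark.conf.set("spark.sql.parquet.int96RebaseModeInWrite", "LEGACY")
{% endhighlight %}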