# Text Files


Spark SQL provides `spark.read().text("file_name")` to read a file or directory of text files into a Spark DataFrame, and `dataframe.write().text("path")` to write to a text file. When reading a text file, each line becomes a row with a single string column named "value" by default. The line separator can be changed as shown in the example below. The `option()` function can be used to customize read or write behavior, such as controlling the line separator, compression, and so on.

<div class="codetabs">

<div data-lang="python" markdown="1">
{% include_example text_dataset python/sql/datasource.py %}
</div>

<div data-lang="scala" markdown="1">
{% include_example text_dataset scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
</div>

<div data-lang="java" markdown="1">
{% include_example text_dataset java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
</div>

</div>
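
If the bundled examples aren't handy, here is a minimal self-contained PySpark sketch of the same read and write calls; the input path and output directory are assumptions chosen for illustration:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("TextDatasetExample").getOrCreate()

# Read a text file (or a directory of text files); each line becomes a row
# with a single string column named "value". The path is an assumption.
df = spark.read.text("examples/src/main/resources/people.txt")
df.show(truncate=False)

# Write back out as plain text. The text sink requires a single string
# column, which this DataFrame already has. The output dir is an assumption.
df.write.mode("overwrite").text("output/people-text")
```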

## Data Source Option

Data source options of text can be set via the `.option`/`.options` methods of `DataFrameReader`, `DataFrameWriter`, `DataStreamReader`, and `DataStreamWriter`, or via the `OPTIONS` clause of `CREATE TABLE USING DATA_SOURCE`:

<table>
  <thead><tr><th><b>Property Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th></tr></thead>
  <tr>
    <td><code>wholetext</code></td>
    <td><code>false</code></td>
    <td>If true, read each file from input path(s) as a single row.</td>
    <td>read</td>
  </tr>
  <tr>
    <td><code>lineSep</code></td>
    <td><code>\r</code>, <code>\r\n</code>, <code>\n</code> (for reading), <code>\n</code> (for writing)</td>
    <td>Defines the line separator that should be used for reading or writing.</td>
    <td>read/write</td>
  </tr>
  <tr>
    <td><code>compression</code></td>
    <td>(none)</td>
    <td>Compression codec to use when saving to file. This can be one of the known case-insensitive shortened names (none, bzip2, gzip, lz4, snappy and deflate).</td>
    <td>write</td>
  </tr>
</table>

Other generic options can be found in <a href="https://spark.apache.org/docs/latest/sql-data-sources-generic-options.html">Generic File Source Options</a>.
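
As a hedged sketch of how these options combine in practice (the paths, separator, and codec below are chosen purely for illustration):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("TextOptionsExample").getOrCreate()

# wholetext: each whole file becomes one row instead of one row per line.
# The input directory is an assumption for illustration.
whole_files = spark.read.option("wholetext", True).text("data/notes")

# lineSep: override the default separators (\r, \r\n, \n) when reading;
# here rows are split on commas, purely for illustration.
comma_rows = spark.read.option("lineSep", ",").text("data/comma-separated.txt")

# compression: write gzip-compressed text files to the (assumed) output dir.
comma_rows.write.option("compression", "gzip").text("output/compressed-text")
```

In PySpark, `wholetext` and `lineSep` can equivalently be passed as keyword arguments to `DataFrameReader.text()` instead of going through `option()`.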