python/docs/source/getting_started/quickstart_ps.ipynb
This is a short introduction to pandas API on Spark, geared mainly toward new users. This notebook shows you some key differences between pandas and pandas API on Spark. You can run these examples yourself in the 'Live Notebook: pandas API on Spark' at the quickstart page.
Customarily, we import pandas API on Spark as follows:
import pandas as pd
import numpy as np
import pyspark.pandas as ps
from pyspark.sql import SparkSession
Creating a pandas-on-Spark Series by passing a list of values, letting pandas API on Spark create a default integer index:
s = ps.Series([1, 3, 5, np.nan, 6, 8])
s
Creating a pandas-on-Spark DataFrame by passing a dict of objects that can be converted to series-like.
psdf = ps.DataFrame(
    {'a': [1, 2, 3, 4, 5, 6],
     'b': [100, 200, 300, 400, 500, 600],
     'c': ["one", "two", "three", "four", "five", "six"]},
    index=[10, 20, 30, 40, 50, 60])
psdf
Creating a pandas DataFrame by passing a numpy array, with a datetime index and labeled columns:
dates = pd.date_range('20130101', periods=6)
dates
pdf = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))
pdf
Now, this pandas DataFrame can be converted to a pandas-on-Spark DataFrame
psdf = ps.from_pandas(pdf)
type(psdf)
It looks and behaves the same as a pandas DataFrame.
psdf
Also, it is possible to easily create a pandas-on-Spark DataFrame from a Spark DataFrame.
Creating a Spark DataFrame from a pandas DataFrame
spark = SparkSession.builder.getOrCreate()
sdf = spark.createDataFrame(pdf)
sdf.show()
Creating a pandas-on-Spark DataFrame from the Spark DataFrame.
psdf = sdf.pandas_api()
psdf
Having specific dtypes. Types that are common to both Spark and pandas are currently supported.
psdf.dtypes
Here is how to show the top rows of the frame.
Note that the data in a Spark dataframe does not preserve the natural order by default. The natural order can be preserved by setting the compute.ordered_head option, but it causes a performance overhead due to internal sorting.
psdf.head()
Displaying the index, columns, and the underlying numpy data.
psdf.index
psdf.columns
psdf.to_numpy()
Showing a quick statistical summary of your data
psdf.describe()
Transposing your data
psdf.T
Sorting by its index
psdf.sort_index(ascending=False)
Sorting by value
psdf.sort_values(by='B')
Pandas API on Spark primarily uses the value np.nan to represent missing data. It is by default not included in computations.
pdf1 = pdf.reindex(index=dates[0:4], columns=list(pdf.columns) + ['E'])
pdf1.loc[dates[0]:dates[1], 'E'] = 1
psdf1 = ps.from_pandas(pdf1)
psdf1
To drop any rows that have missing data.
psdf1.dropna(how='any')
Filling missing data.
psdf1.fillna(value=5)
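Because psdf1 above contains NaN values in varying positions, the effect of dropna and fillna can be easier to see on a tiny fixed frame. A deterministic sketch in plain pandas, whose semantics pandas API on Spark mirrors for these operations:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [1.0, np.nan], 'y': [np.nan, 2.0]})

# dropna(how='any') removes every row containing at least one NaN;
# here each row has one, so the result is empty.
print(df.dropna(how='any'))

# fillna(value=5) replaces every NaN with 5.
print(df.fillna(value=5))
```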
Performing a descriptive statistic:
psdf.mean()
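The point that np.nan is excluded from computations by default can be made concrete with a small deterministic example in plain pandas, which pandas API on Spark mirrors here:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

# The NaN entry is skipped, so the mean is (1 + 3) / 2 = 2.0
print(s.mean())
```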
Various configurations in PySpark can be applied internally in pandas API on Spark. For example, you can enable Arrow optimization to significantly speed up the internal pandas conversion. See also <a href="https://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html">PySpark Usage Guide for Pandas with Apache Arrow</a> in the PySpark documentation.
prev = spark.conf.get("spark.sql.execution.arrow.pyspark.enabled") # Keep its default value.
ps.set_option("compute.default_index_type", "distributed")  # Use default index to prevent overhead.
import warnings
warnings.filterwarnings("ignore") # Ignore warnings coming from Arrow optimizations.
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", True)
%timeit ps.range(300000).to_pandas()
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", False)
%timeit ps.range(300000).to_pandas()
ps.reset_option("compute.default_index_type")
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", prev) # Set its default value back.
By “group by” we are referring to a process involving one or more of the following steps:
- Splitting the data into groups based on some criteria
- Applying a function to each group independently
- Combining the results into a data structure
psdf = ps.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
                           'foo', 'bar', 'foo', 'foo'],
                     'B': ['one', 'one', 'two', 'three',
                           'two', 'two', 'one', 'three'],
                     'C': np.random.randn(8),
                     'D': np.random.randn(8)})
psdf
Grouping and then applying the sum() function to the resulting groups.
psdf.groupby('A').sum()
Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.
psdf.groupby(['A', 'B']).sum()
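Because the frame above uses random data, the group sums differ on every run. Here is a tiny deterministic sketch of the same split-apply-combine pattern in plain pandas, whose behavior pandas API on Spark mirrors:

```python
import pandas as pd

df = pd.DataFrame({'key': ['a', 'b', 'a', 'b'],
                   'val': [1, 2, 3, 4]})

# Split the rows by 'key', apply sum() to each group,
# and combine the results into a new frame.
print(df.groupby('key').sum())
# group 'a': 1 + 3 = 4, group 'b': 2 + 4 = 6
```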
pser = pd.Series(np.random.randn(1000),
                 index=pd.date_range('1/1/2000', periods=1000))
psser = ps.Series(pser)
psser = psser.cummax()
psser.plot()
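cummax() replaces each value with the running maximum seen so far, which is why the plotted series never decreases. A small deterministic sketch in plain pandas, which pandas API on Spark mirrors, makes this concrete:

```python
import pandas as pd

s = pd.Series([1, 3, 2, 5, 4])

# Each element becomes the maximum of all elements up to that point,
# so the result is non-decreasing.
print(s.cummax().tolist())  # [1, 3, 3, 5, 5]
```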
On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:
pdf = pd.DataFrame(np.random.randn(1000, 4), index=pser.index,
                   columns=['A', 'B', 'C', 'D'])
psdf = ps.from_pandas(pdf)
psdf = psdf.cummax()
psdf.plot()
For more details, see the Plotting documentation.
CSV is straightforward and easy to use. See here to write a CSV file and here to read a CSV file.
psdf.to_csv('foo.csv')
ps.read_csv('foo.csv').head(10)
Parquet is an efficient and compact file format that is faster to read and write. See here to write a Parquet file and here to read a Parquet file.
psdf.to_parquet('bar.parquet')
ps.read_parquet('bar.parquet').head(10)
In addition, pandas API on Spark fully supports Spark's various datasources, such as ORC and external datasources. See here to write to a specified datasource and here to read from it.
psdf.spark.to_spark_io('zoo.orc', format="orc")
ps.read_spark_io('zoo.orc', format="orc").head(10)
See the Input/Output documentation for more details.