docs/interpreter/hdfs.md
{% include JB/setup %}
The Hadoop Distributed File System (HDFS) is a distributed, fault-tolerant file system that is part of the Apache Hadoop project. It is often used as storage for distributed processing engines such as Hadoop MapReduce and Apache Spark, or as the underlying store for file systems like Alluxio.
This interpreter connects to HDFS over the WebHDFS HTTP REST interface. It accepts the basic shell-style file commands against HDFS; currently, only browsing is supported.
Tip: Use `Ctrl+.` for autocompletion.
In a notebook, to enable the HDFS interpreter, click the Gear icon and select HDFS.
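Once the interpreter is enabled, you can browse HDFS directly from a paragraph. A minimal sketch, assuming the interpreter is bound under the `file` group (the `%file` prefix is an assumption; use whatever prefix your interpreter binding shows):

```
%file
ls /
```

This should print a directory listing for the HDFS root, similar to `hdfs dfs -ls /` on the command line.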
You can confirm that the WebHDFS API is reachable by running a curl command against the WebHDFS endpoint configured for the interpreter.
Here is an example:
```bash
$ curl "http://localhost:50070/webhdfs/v1/?op=LISTSTATUS"
```
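The same check can be scripted. Below is a hedged sketch, using only the Python standard library, of building and calling the WebHDFS `LISTSTATUS` URL shown in the curl example; the host and port (`localhost:50070`) are assumptions carried over from that example, and `webhdfs_url`/`list_status` are illustrative helper names, not part of the interpreter:

```python
import json
import urllib.request


def webhdfs_url(path, op, host="localhost", port=50070):
    """Build a WebHDFS v1 URL for the given HDFS path and operation."""
    return f"http://{host}:{port}/webhdfs/v1{path}?op={op}"


def list_status(path, host="localhost", port=50070):
    """List an HDFS directory via the WebHDFS LISTSTATUS operation."""
    url = webhdfs_url(path, "LISTSTATUS", host, port)
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    # WebHDFS wraps the listing as FileStatuses -> FileStatus (a JSON array)
    return [entry["pathSuffix"] for entry in payload["FileStatuses"]["FileStatus"]]


# Example (requires a running NameNode with WebHDFS enabled):
# print(list_status("/"))
```

If the endpoint is reachable, `list_status("/")` returns the names of the entries under the HDFS root, the same data the curl command prints as raw JSON.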