
HDFS file commands quick reference

Here's a quick reference to the most frequently used (and most useful) HDFS (Hadoop Distributed File System) commands for managing files.

$HADOOP_HOME/bin/hadoop fs -ls /
Lists all of the files in the root HDFS directory.
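
If the contents of subdirectories are also needed, the -R flag should give a recursive listing (the path below is just an example):
$HADOOP_HOME/bin/hadoop fs -ls -R /rawdata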

$HADOOP_HOME/bin/hadoop fs -ls /rawdata/server01
Lists all of the files in the HDFS directory at the given path.

$HADOOP_HOME/bin/hadoop fs -mkdir /rawdata/
Creates a new directory in HDFS.
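
To create nested directories in a single step, the -p flag (analogous to the Unix mkdir -p) creates any missing parent directories; the path below is illustrative:
$HADOOP_HOME/bin/hadoop fs -mkdir -p /rawdata/server01/2019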

$HADOOP_HOME/bin/hadoop fs -put /home/user/importdir/*.txt /rawdata/
Copies files from a local directory to HDFS at the specified path. The destination directory must already exist.
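
As a minimal sketch of a typical import sequence (all paths are illustrative): create the destination directory, copy the files, then verify with a listing:
$HADOOP_HOME/bin/hadoop fs -mkdir -p /rawdata/
$HADOOP_HOME/bin/hadoop fs -put /home/user/importdir/*.txt /rawdata/
$HADOOP_HOME/bin/hadoop fs -ls /rawdata/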

$HADOOP_HOME/bin/hadoop fs -get /rawdata/test01.txt /home/user/importdir/

Copies files from HDFS back to the local filesystem.
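
-get should also work on whole directories; for example (illustrative paths), the following would copy the entire /rawdata tree to the local filesystem:
$HADOOP_HOME/bin/hadoop fs -get /rawdata /home/user/importdir/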

$HADOOP_HOME/bin/hadoop fs -cp /rawdata/test01.txt /rawdatabackup/test01.txt
Copies files within HDFS.
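
To move or rename within HDFS instead of copying (no data duplication), the related -mv command can be used; the destination path below is just an example:
$HADOOP_HOME/bin/hadoop fs -mv /rawdata/test01.txt /rawdatabackup/test01.txt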

$HADOOP_HOME/bin/hadoop fs -rm /rawdata/*.txt
Deletes files from an HDFS directory.

$HADOOP_HOME/bin/hadoop fs -rm -r /rawdata
Deletes an HDFS directory and all of its contents.
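
Note that when the HDFS trash feature is enabled, deleted files are first moved to a trash directory rather than removed; to bypass the trash and free the space immediately, the -skipTrash option can be added (use with care, as this is irreversible):
$HADOOP_HOME/bin/hadoop fs -rm -r -skipTrash /rawdata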

$HADOOP_HOME/bin/hadoop fs -tail /rawdata/test01.txt 
Prints the last kilobyte of a file to standard output.
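
To keep following a file as new data is appended (similar to the Unix tail -f), the -f option should work:
$HADOOP_HOME/bin/hadoop fs -tail -f /rawdata/test01.txt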

$HADOOP_HOME/bin/hadoop fs -df
Displays the total capacity and the used and available space in HDFS.

$HADOOP_HOME/bin/hadoop fs -df -h
Displays the same information in a human-readable format (e.g. sizes in MB, GB or TB).
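
To check how much space a specific directory is consuming rather than the whole filesystem, the -du command accepts the same -h flag; the path below is illustrative:
$HADOOP_HOME/bin/hadoop fs -du -h /rawdata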
