Thursday, November 13, 2014

Installing Hadoop on a Raspberry Pi cluster

This document details setting up and testing Hadoop on a Raspberry Pi cluster. The steps are almost exactly the same for any Linux flavor, particularly Debian/Ubuntu; there are only a couple of Pi-specific details.

In this case, we have a 5-node cluster. If your cluster is a different size, make sure the Hadoop distribution is on each machine, update the slaves file to match, and copy the config files to each machine as shown.
 

Machine Setup

The first thing to do is add DNS entries for all of your cluster machines. If you have a DNS server available, that's great. If you don't, you can edit the /etc/hosts file and add the entries there. This has to be done on every machine in the cluster. My entries look like this:

10.1.10.120 pi0 master
10.1.10.121 pi1
10.1.10.122 pi2
10.1.10.123 pi3
10.1.10.124 pi4
 
For each machine, the hostname needs to be changed to match the names defined above. The default Raspbian hostname is raspberrypi.

Edit the /etc/hostname file and change the entry to the hostname you want. In my case, the first Pi is pi0. A quick reboot of each node should confirm that it comes up with the correct hostname.

 
We want to add a hadoop user to each machine. This is not strictly necessary for Hadoop to run, but it keeps our Hadoop files and processes separate from anything else on these machines.

Add a new user to each machine to run hadoop.
pi@pi0 ~ $ sudo adduser hadoop 
Adding user `hadoop' ... 
Adding new group `hadoop' (1004) ... 
Adding new user `hadoop' (1001) with group `hadoop' ... 
Creating home directory `/home/hadoop' ... 
Copying files from `/etc/skel' ... 
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully 
Changing the user information for hadoop 
Enter the new value, or press ENTER for the default 
 Full Name []: Hadoop 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] Y 
 
 
Log out as the pi user and log back in as the hadoop user on your master node. Create an SSH key pair with an empty passphrase (so the Hadoop scripts can log in without prompting) and append the public key to your own authorized_keys file.
 
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Copy the public key to all of your other nodes.
ssh-copy-id -i .ssh/id_dsa.pub hadoop@pi1
ssh-copy-id -i .ssh/id_dsa.pub hadoop@pi2
ssh-copy-id -i .ssh/id_dsa.pub hadoop@pi3
ssh-copy-id -i .ssh/id_dsa.pub hadoop@pi4
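With more nodes, repeating the ssh-copy-id line per host gets tedious; a small loop covers them all. In this sketch the echo is a safety prefix so each command is only printed for review — remove it to actually distribute the key:

```shell
# Sketch: distribute the public key to every worker node in one loop.
# "echo" prints the commands instead of running them; drop it once the
# host list looks right.
for node in pi1 pi2 pi3 pi4; do
  echo ssh-copy-id -i .ssh/id_dsa.pub "hadoop@$node"
done
```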
 
 


You can also copy the key directly to a new machine with something like this (note that this overwrites any existing authorized_keys on the target):

scp .ssh/id_dsa.pub hadoop@pi1:.ssh/authorized_keys
 

Hadoop Setup



Copy the Hadoop distribution to the hadoop user account on each node and decompress it.
I always create a symlink to make things a bit easier:
ln -s hadoop-1.x.x hadoop
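The unpack-and-symlink step looks like the sketch below. It runs in a scratch directory with a stand-in directory name (the real version number will vary); on an actual node you would first extract the downloaded tarball:

```shell
# Demonstration of the symlink trick in a scratch directory.
# On a real node you'd first run something like: tar -xzf hadoop-1.1.1.tar.gz
demo=$(mktemp -d)
cd "$demo"
mkdir hadoop-1.1.1              # stands in for the extracted distribution
ln -s hadoop-1.1.1 hadoop       # version-independent path: ~/hadoop/...
readlink hadoop
```

This way config files and scripts can refer to ~/hadoop regardless of which Hadoop version is installed.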


Now we need to edit some hadoop config files to make everything work.

cd hadoop/conf
ls


This should give you a listing that looks like this:
hadoop@raspberrypi ~/hadoop/conf $ ls 
capacity-scheduler.xml  core-site.xml       hadoop-env.sh               hadoop-policy.xml  
log4j.properties       mapred-site.xml  slaves                  ssl-server.xml.example 
configuration.xsl       fair-scheduler.xml  hadoop-metrics2.properties  hdfs-site.xml      
mapred-queue-acls.xml  masters          ssl-client.xml.example  taskcontroller.cfg 


The files we're interested in are masters and slaves. Add the address of the node that will be the primary NameNode and JobTracker to the masters file. Add the addresses of the nodes that will be DataNodes and TaskTrackers to the slaves file. You can also add the master node's address to the slaves file if you want it to store data and run map reduce jobs, but I'm not doing that in this case. The files look like this for my cluster:
 
hadoop@raspberrypi ~/hadoop/conf $ more masters 
pi0 
hadoop@raspberrypi ~/hadoop/conf $ more slaves 
pi1
pi2
pi3
pi4 


These files need to be copied to every node in the cluster. The easy way to do that is as follows:

hadoop@pi0 ~/hadoop/conf $ scp masters slaves pi1:hadoop/conf/. 
masters       100%   12     0.0KB/s   00:00                                        
slaves       100%   48     0.1KB/s   00:00    
hadoop@pi0 ~/hadoop/conf $ scp masters slaves pi2:hadoop/conf/. 
masters       100%   12     0.0KB/s   00:00    
slaves       100%   48     0.1KB/s   00:00    
hadoop@pi0 ~/hadoop/conf $ scp masters slaves pi3:hadoop/conf/. 
masters       100%   12     0.0KB/s   00:00    
slaves       100%   48     0.1KB/s   00:00    
hadoop@pi0 ~/hadoop/conf $ scp masters slaves pi4:hadoop/conf/. 
masters       100%   12     0.0KB/s   00:00    
slaves       100%   48     0.1KB/s   00:00
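Rather than repeating the scp four times, a loop can cover all the workers. As before, the echo prints each command for review; remove it to actually copy the files:

```shell
# Sketch: push the masters/slaves files to every worker node.
# "echo" prints the commands for review; drop it to run them for real.
for node in pi1 pi2 pi3 pi4; do
  echo scp masters slaves "$node":hadoop/conf/
done
```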


We need to define the JAVA_HOME setting in the hadoop-env.sh file. Since Hadoop runs on Java, it needs to know where the JVM lives. The Oracle JDK comes standard with the latest Raspbian release, so we can set JAVA_HOME as follows:

# The java implementation to use.  Required. 
export JAVA_HOME=/usr/lib/jvm/jdk-7-oracle-armhf


Next we need to look at the core-site.xml, hdfs-site.xml and mapred-site.xml files. These configure the distributed file system (HDFS) and the MapReduce framework that runs our jobs. By default, both subsystems put their files in temporary file system space. That's fine for playing around, but as soon as you reboot, all of your data disappears and you have to start over. First, we need to tell each machine where the NameNode is. This is configured in the core-site.xml file.

<property>
 <name>fs.default.name</name>
 <value>hdfs://master:54310</value>
</property>
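In context, that property sits inside the standard <configuration> wrapper of the stock Hadoop 1.x config files, so a minimal complete core-site.xml for this cluster would look like:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
 <property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
 </property>
</configuration>
```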

Next we need to tell each node where the JobTracker is, and give MapReduce a safe place to store its working data so it survives a reboot. This is configured in the mapred-site.xml file.

<property>
 <name>mapred.job.tracker</name>
 <value>master:54311</value>
</property>
 
<property>
 <name>mapred.local.dir</name>
 <value>/home/hadoop/mapred</value>
</property>


Lastly, we need to specify safe directories for HDFS to store its metadata and data. This goes in the hdfs-site.xml file.

<property>
 <name>dfs.name.dir</name>
 <value>/home/hadoop/name</value>
</property>
<property>
 <name>dfs.data.dir</name>
 <value>/home/hadoop/data</value>
</property> 
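As with core-site.xml, these properties go inside the <configuration> wrapper, so a minimal hdfs-site.xml for this cluster would look like:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
 <property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/name</value>
 </property>
 <property>
  <name>dfs.data.dir</name>
  <value>/home/hadoop/data</value>
 </property>
</configuration>
```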
 
 
All of these files need to be copied to each of the other nodes in the cluster, just as we did with the masters and slaves files.
 
hadoop@pi0 ~/hadoop/conf $ scp hadoop-env.sh core-site.xml hdfs-site.xml mapred-site.xml hadoop@pi1:hadoop/conf/. 
hadoop-env.sh       100% 2218     2.2KB/s   00:00    
core-site.xml      100%  268     0.3KB/s   00:00    
hdfs-site.xml      100%  347     0.3KB/s   00:00    
mapred-site.xml      100%  356     0.4KB/s   00:00    
hadoop@pi0 ~/hadoop/conf $ scp hadoop-env.sh core-site.xml hdfs-site.xml mapred-site.xml hadoop@pi2:hadoop/conf/. 
hadoop-env.sh       100% 2218     2.2KB/s   00:00    
core-site.xml      100%  268     0.3KB/s   00:00    
hdfs-site.xml      100%  347     0.3KB/s   00:00    
mapred-site.xml      100%  356     0.4KB/s   00:00    
hadoop@pi0 ~/hadoop/conf $ scp hadoop-env.sh core-site.xml hdfs-site.xml mapred-site.xml hadoop@pi3:hadoop/conf/. 
hadoop-env.sh       100% 2218     2.2KB/s   00:00    
core-site.xml      100%  268     0.3KB/s   00:00    
hdfs-site.xml      100%  347     0.3KB/s   00:00    
mapred-site.xml      100%  356     0.4KB/s   00:00    
hadoop@pi0 ~/hadoop/conf $ scp hadoop-env.sh core-site.xml hdfs-site.xml mapred-site.xml hadoop@pi4:hadoop/conf/. 
hadoop-env.sh       100% 2218     2.2KB/s   00:00    
core-site.xml      100%  268     0.3KB/s   00:00    
hdfs-site.xml      100%  347     0.3KB/s   00:00    
mapred-site.xml      100%  356     0.4KB/s   00:00


We have one interesting task left. By default, the hadoop script starts the DataNode JVM with the -server flag, which the Oracle Java distribution on the Pi does not support. We need to edit the bin/hadoop script to remove this option.

cd bin
vi hadoop

Search for “-server”
Change this:
    HADOOP_OPTS="$HADOOP_OPTS -server $HADOOP_DATANODE_OPTS" 
to this:
    HADOOP_OPTS="$HADOOP_OPTS $HADOOP_DATANODE_OPTS" 
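If you'd rather not open an editor on every node, the same edit can be scripted with sed. The demo below applies the substitution to a scratch copy of the line so you can see the transformation; to do it for real, point sed (with a backup suffix) at bin/hadoop instead:

```shell
# Demonstrate the -server removal on a scratch file.
# For the real script: sed -i.bak 's/ -server \$HADOOP_DATANODE_OPTS/ \$HADOOP_DATANODE_OPTS/' bin/hadoop
tmp=$(mktemp)
echo '    HADOOP_OPTS="$HADOOP_OPTS -server $HADOOP_DATANODE_OPTS"' > "$tmp"
sed -i 's/ -server \$HADOOP_DATANODE_OPTS/ \$HADOOP_DATANODE_OPTS/' "$tmp"
cat "$tmp"
```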
 
 
We need to copy this file to all of the nodes in the cluster.

hadoop@pi0 ~/hadoop/bin $ scp hadoop hadoop@pi1:hadoop/bin/. 
hadoop       100%   14KB  13.8KB/s   00:00    
hadoop@pi0 ~/hadoop/bin $ scp hadoop hadoop@pi2:hadoop/bin/. 
hadoop       100%   14KB  13.8KB/s   00:00    
hadoop@pi0 ~/hadoop/bin $ scp hadoop hadoop@pi3:hadoop/bin/. 
hadoop       100%   14KB  13.8KB/s   00:00    
hadoop@pi0 ~/hadoop/bin $ scp hadoop hadoop@pi4:hadoop/bin/. 
hadoop       100%   14KB  13.8KB/s   00:00    


Now we're ready to get things up and running! First we need to format the NameNode; this sets up the storage structure for HDFS. It's done with the bin/hadoop namenode -format command as follows:

hadoop@pi0 ~/hadoop $ bin/hadoop namenode -format 
14/11/04 10:53:28 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************ 
STARTUP_MSG: Starting NameNode 
STARTUP_MSG:   host = pi0/10.1.10.120 
STARTUP_MSG:   args = [-format] 
STARTUP_MSG:   version = 1.1.1 
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:48:11 UTC 2012 
************************************************************/ 
14/11/04 10:53:30 INFO util.GSet: VM type       = 32-bit 
14/11/04 10:53:30 INFO util.GSet: 2% max memory = 19.335 MB 
14/11/04 10:53:30 INFO util.GSet: capacity      = 2^22 = 4194304 entries 
14/11/04 10:53:30 INFO util.GSet: recommended=4194304, actual=4194304 
14/11/04 10:53:34 INFO namenode.FSNamesystem: fsOwner=hadoop 
14/11/04 10:53:35 INFO namenode.FSNamesystem: supergroup=supergroup 
14/11/04 10:53:35 INFO namenode.FSNamesystem: isPermissionEnabled=true 
14/11/04 10:53:35 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100 
14/11/04 10:53:35 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s) 
14/11/04 10:53:35 INFO namenode.NameNode: Caching file names occuring more than 10 times 
14/11/04 10:53:37 INFO common.Storage: Image file of size 112 saved in 0 seconds. 
14/11/04 10:53:38 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/home/hadoop/name/current/edits 
14/11/04 10:53:38 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/home/hadoop/name/current/edits 
14/11/04 10:53:39 INFO common.Storage: Storage directory /home/hadoop/name has been successfully formatted. 
14/11/04 10:53:39 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at pi0/10.1.10.120 
************************************************************/
 
 
Since we set up passwordless SSH logins earlier, we can start the entire cluster now with a single command!
 
hadoop@pi0 ~/hadoop $ bin/start-all.sh 
starting namenode, logging to /home/hadoop/hadoop-1.1.1/libexec/../logs/hadoop-hadoop-namenode-pi0.out 
pi1: starting datanode, logging to /home/hadoop/hadoop-1.1.1/libexec/../logs/hadoop-hadoop-datanode-pi1.out 
pi3: starting datanode, logging to /home/hadoop/hadoop-1.1.1/libexec/../logs/hadoop-hadoop-datanode-pi3.out 
pi4: starting datanode, logging to /home/hadoop/hadoop-1.1.1/libexec/../logs/hadoop-hadoop-datanode-pi4.out 
pi2: starting datanode, logging to /home/hadoop/hadoop-1.1.1/libexec/../logs/hadoop-hadoop-datanode-pi2.out 
pi0: starting secondarynamenode, logging to /home/hadoop/hadoop-1.1.1/libexec/../logs/hadoop-hadoop-secondarynamenode-pi0.out 
starting jobtracker, logging to /home/hadoop/hadoop-1.1.1/libexec/../logs/hadoop-hadoop-jobtracker-pi0.out 
pi1: starting tasktracker, logging to /home/hadoop/hadoop-1.1.1/libexec/../logs/hadoop-hadoop-tasktracker-pi1.out 
pi2: starting tasktracker, logging to /home/hadoop/hadoop-1.1.1/libexec/../logs/hadoop-hadoop-tasktracker-pi2.out 
pi4: starting tasktracker, logging to /home/hadoop/hadoop-1.1.1/libexec/../logs/hadoop-hadoop-tasktracker-pi4.out 
pi3: starting tasktracker, logging to /home/hadoop/hadoop-1.1.1/libexec/../logs/hadoop-hadoop-tasktracker-pi3.out 
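A quick way to verify that the daemons actually came up is jps, which ships with the JDK. On the master you should see NameNode, SecondaryNameNode and JobTracker; on each worker, DataNode and TaskTracker. The guard below is only there in case jps isn't on your PATH:

```shell
# List the running Java daemons on this node (NameNode, JobTracker, etc.).
if command -v jps >/dev/null 2>&1; then
  jps
else
  echo "jps not on PATH - check that JAVA_HOME/bin is in your PATH"
fi
```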


Open up a browser and go to the following URLs (pi0 is the master/NameNode in this cluster):
http://pi0:50070/dfshealth.jsp
http://pi0:50030/jobtracker.jsp

Testing the Setup

Now we can run something simple and quick to make sure things are working: the pi estimator from the bundled examples jar, with 4 map tasks and 1000 samples per map.

hadoop@pi0 ~/hadoop $ bin/hadoop jar hadoop-examples-1.1.1.jar pi 4 1000 
Number of Maps  = 4 
Samples per Map = 1000 
Wrote input for Map #0 
Wrote input for Map #1 
Wrote input for Map #2 
Wrote input for Map #3 
Starting Job 
14/11/04 11:50:50 INFO mapred.FileInputFormat: Total input paths to process : 4 
14/11/04 11:50:55 INFO mapred.JobClient: Running job: job_201411041127_0001 
14/11/04 11:50:56 INFO mapred.JobClient:  map 0% reduce 0% 
14/11/04 11:51:49 INFO mapred.JobClient:  map 25% reduce 0% 
14/11/04 11:51:54 INFO mapred.JobClient:  map 100% reduce 0% 
14/11/04 11:52:22 INFO mapred.JobClient:  map 100% reduce 66% 
14/11/04 11:52:25 INFO mapred.JobClient:  map 100% reduce 100% 
14/11/04 11:52:44 INFO mapred.JobClient: Job complete: job_201411041127_0001 
14/11/04 11:52:44 INFO mapred.JobClient: Counters: 30 
14/11/04 11:52:44 INFO mapred.JobClient:   Job Counters 
14/11/04 11:52:44 INFO mapred.JobClient:     Launched reduce tasks=1 
14/11/04 11:52:44 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=129653 
14/11/04 11:52:44 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0 
14/11/04 11:52:44 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0 
14/11/04 11:52:44 INFO mapred.JobClient:     Launched map tasks=4 
14/11/04 11:52:44 INFO mapred.JobClient:     Data-local map tasks=4 
14/11/04 11:52:44 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=34464 
14/11/04 11:52:44 INFO mapred.JobClient:   File Input Format Counters 
14/11/04 11:52:44 INFO mapred.JobClient:     Bytes Read=472 
14/11/04 11:52:44 INFO mapred.JobClient:   File Output Format Counters 
14/11/04 11:52:44 INFO mapred.JobClient:     Bytes Written=97 
14/11/04 11:52:44 INFO mapred.JobClient:   FileSystemCounters 
14/11/04 11:52:44 INFO mapred.JobClient:     FILE_BYTES_READ=94 
14/11/04 11:52:44 INFO mapred.JobClient:     HDFS_BYTES_READ=956 
14/11/04 11:52:44 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=119665 
14/11/04 11:52:44 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=215 
14/11/04 11:52:44 INFO mapred.JobClient:   Map-Reduce Framework 
14/11/04 11:52:44 INFO mapred.JobClient:     Map output materialized bytes=112 
14/11/04 11:52:44 INFO mapred.JobClient:     Map input records=4 
14/11/04 11:52:44 INFO mapred.JobClient:     Reduce shuffle bytes=112 
14/11/04 11:52:44 INFO mapred.JobClient:     Spilled Records=16 
14/11/04 11:52:44 INFO mapred.JobClient:     Map output bytes=72 
14/11/04 11:52:44 INFO mapred.JobClient:     Total committed heap usage (bytes)=819818496 
14/11/04 11:52:44 INFO mapred.JobClient:     CPU time spent (ms)=16580 
14/11/04 11:52:44 INFO mapred.JobClient:     Map input bytes=96 
14/11/04 11:52:44 INFO mapred.JobClient:     SPLIT_RAW_BYTES=484 
14/11/04 11:52:44 INFO mapred.JobClient:     Combine input records=0 
14/11/04 11:52:44 INFO mapred.JobClient:     Reduce input records=8 
14/11/04 11:52:44 INFO mapred.JobClient:     Reduce input groups=8 
14/11/04 11:52:44 INFO mapred.JobClient:     Combine output records=0 
14/11/04 11:52:44 INFO mapred.JobClient:     Physical memory (bytes) snapshot=586375168 
14/11/04 11:52:44 INFO mapred.JobClient:     Reduce output records=0 
14/11/04 11:52:44 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1716187136 
14/11/04 11:52:44 INFO mapred.JobClient:     Map output records=8 
Job Finished in 116.488 seconds 
Estimated value of Pi is 3.14000000000000000000
