Learning over Large Data Volumes (Aprendizaje en Grandes Volúmenes de Datos), Class 17
by Pablo Ariel Duboue, PhD
available under a CC-BY-SA license
Presentation:
* feedback
* an alternative to Hadoop that I would recommend: message queues. Why? More flexibility, and they work off-line.
* if they get lost or don't work, call me
* script handed out in advance: no, it is important that you take the time to type it yourselves. Cut & paste is too easy.
On the machine:
# setting up a machine as a pseudo-cluster
On the web: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation
[Following these instructions you can set up a pseudo-distributed
Hadoop on your own machine. In class we logged into a machine --no
longer available-- where the setup had already taken place, by
executing the code below.]
# note: you need an ssh server running on your machine
PROMPT$ echo 'export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64' >> .profile
PROMPT$ cd hadoop-2.4.1
PROMPT$ echo export HADOOP_PREFIX=$PWD >> ../.profile
PROMPT$ mv ../*.java .   # move WordCount.java from the home directory into hadoop-2.4.1
PROMPT$ echo 'export HADOOP_CLASSPATH=$JAVA_HOME/lib/tools.jar' >> ~/.profile
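Note that HADOOP_CLASSPATH is defined in terms of JAVA_HOME, so the exports must land in .profile in this order. A quick check in a subshell (using the same paths as in class; adjust them to your own install):

```shell
# Apply the three exports in a throwaway subshell and confirm that
# HADOOP_CLASSPATH expands through JAVA_HOME as intended.
(
  export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
  export HADOOP_PREFIX="$HOME/hadoop-2.4.1"
  export HADOOP_CLASSPATH="$JAVA_HOME/lib/tools.jar"
  echo "$HADOOP_CLASSPATH"
)
# -> /usr/lib/jvm/java-7-openjdk-amd64/lib/tools.jar
```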
PROMPT$ ./bin/hadoop com.sun.tools.javac.Main WordCount.java
PROMPT$ jar cf wc.jar WordCount*.class
PROMPT$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
PROMPT$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
PROMPT$ mv lib/native ./lib/native-out   # move the native libraries aside (avoids load warnings if they do not match your platform)
PROMPT$ emacs etc/hadoop/core-site.xml etc/hadoop/hdfs-site.xml etc/hadoop/hadoop-env.sh
# set etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
# set etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
# set JAVA_HOME in etc/hadoop/hadoop-env.sh
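The two configuration files can also be written without an editor. A minimal sketch using here-documents; it writes to a temporary directory so it can be tried safely (in a real install, point CONF at etc/hadoop inside HADOOP_PREFIX):

```shell
# Write the two pseudo-distributed config files non-interactively.
# CONF would be etc/hadoop in a real install; a temp dir is used here
# so the sketch can be run without touching an actual installation.
CONF=$(mktemp -d)

cat > "$CONF/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

cat > "$CONF/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
```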
PROMPT$ ./bin/hdfs namenode -format
PROMPT$ ./sbin/start-dfs.sh
[You don't need to ssh if setting up in your own machine.]
# ssh -L 50070:localhost:50070 hadoop@arenero.aprendizajengrande.net
# go to http://localhost:50070/
PROMPT$ ./bin/hdfs dfs -mkdir /user
PROMPT$ ./bin/hdfs dfs -df
PROMPT$ wget https://archive.org/download/2013_common_crawl_index_urls/common_crawl_index_urls.bz2
# 21 GB later... 39.7M/s in 11m 22s
PROMPT$ ./bin/hdfs dfs -mkdir /data
PROMPT$ ./bin/hdfs dfs -put common_crawl_index_urls.bz2 /data/
PROMPT$ ./bin/hadoop jar wc.jar WordCount /data/common_crawl_index_urls.bz2 /user/pablo/outputwc
# takes too long in single-node mode; use a smaller input instead
PROMPT$ zcat /usr/share/doc/*/README* > README
PROMPT$ ./bin/hdfs dfs -put README /data/
PROMPT$ ./bin/hadoop jar wc.jar WordCount /data/README /user/pablo/outputwc2
PROMPT$ ./bin/hdfs dfs -cat /user/pablo/outputwc2/part-r-00000
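For intuition, the same map-shuffle-reduce word count can be emulated locally with standard Unix tools: tr plays the mapper (one word per line), sort performs the shuffle, and uniq -c is the reducer. A minimal sketch over a tiny inline input (not the actual README):

```shell
# Emulate WordCount's map (tokenize), shuffle (sort), and
# reduce (count per word) with a plain Unix pipeline.
printf 'hola mundo\nhola hadoop\n' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn
# the count for "hola" (2) sorts first
```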
# setting up YARN
PROMPT$ emacs etc/hadoop/mapred-site.xml etc/hadoop/yarn-site.xml etc/hadoop/yarn-env.sh
# in mapred-site.xml set:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
# in yarn-site.xml set:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
# in yarn-env.sh set JAVA_HOME and HADOOP_YARN_USER
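As with the HDFS configuration, the YARN files can be written non-interactively. A minimal sketch with here-documents; it targets a temporary directory so it can be tried safely (in a real install, point CONF at etc/hadoop):

```shell
# Write the YARN config files non-interactively. CONF would be
# etc/hadoop in a real install; a temp dir is used here so the
# sketch can be run without touching an actual installation.
CONF=$(mktemp -d)

cat > "$CONF/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF

cat > "$CONF/yarn-site.xml" <<'EOF'
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
EOF
```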
PROMPT$ ./sbin/start-yarn.sh
[You don't need to ssh if setting up in your own machine.]
# ssh -L 50070:localhost:50070 -L 8088:localhost:8088 hadoop@arenero.aprendizajengrande.net
# check http://localhost:8088/