0. Preface
The previous post, "Hadoop First Experience: Quickly Setting Up a Hadoop Pseudo-Distributed Environment", set up a working Hadoop environment. This post uses the wordcount program that ships with Hadoop to run a simple word-count example on it.
1. Word Count with the Bundled Example Program
(1) The wordcount program
The wordcount program ships under Hadoop's share directory:
[root@leaf mapreduce]# pwd
/usr/local/hadoop/share/hadoop/mapreduce
[root@leaf mapreduce]# ls
hadoop-mapreduce-client-app-2.6.5.jar         hadoop-mapreduce-client-jobclient-2.6.5-tests.jar
hadoop-mapreduce-client-common-2.6.5.jar      hadoop-mapreduce-client-shuffle-2.6.5.jar
hadoop-mapreduce-client-core-2.6.5.jar        hadoop-mapreduce-examples-2.6.5.jar
hadoop-mapreduce-client-hs-2.6.5.jar          lib
hadoop-mapreduce-client-hs-plugins-2.6.5.jar  lib-examples
hadoop-mapreduce-client-jobclient-2.6.5.jar   sources
The one we need is hadoop-mapreduce-examples-2.6.5.jar.
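Incidentally, running the examples jar with no arguments prints the list of bundled example programs. The output below is a sketch of what the 2.6.x examples driver prints (abridged; exact wording may differ slightly):

[root@leaf mapreduce]# hadoop jar hadoop-mapreduce-examples-2.6.5.jar
An example program must be given as the first argument.
Valid program names are:
  ...
  wordcount: A map/reduce program that counts the words in the input files.
  ...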
(2) Create the HDFS data directories
Create a directory to hold the input files for the MapReduce job (the -p flag creates parent directories as needed):
[root@leaf ~]# hadoop fs -mkdir -p /data/wordcount
Create a directory to hold the output files of the MapReduce job:
[root@leaf ~]# hadoop fs -mkdir /output
List the two directories just created:
[root@leaf ~]# hadoop fs -ls /
drwxr-xr-x   - root supergroup          0 2017-09-01 20:34 /data
drwxr-xr-x   - root supergroup          0 2017-09-01 20:35 /output
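A small aside: in this Hadoop version, hadoop fs and hdfs dfs behave the same for HDFS paths, so the listing above could equally have been written as follows (assuming HDFS is the default filesystem, as in the pseudo-distributed setup):

[root@leaf ~]# hdfs dfs -ls /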
(3) Create a word file and upload it to HDFS
The word file looks like this:
[root@leaf ~]# cat myword.txt
leaf yyh
yyh xpleaf
katy ling
yeyonghao leaf
xpleaf katy
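If you want to reproduce the file, one way to create it is with a heredoc (just a sketch; any editor works equally well):

[root@leaf ~]# cat > myword.txt << EOF
leaf yyh
yyh xpleaf
katy ling
yeyonghao leaf
xpleaf katy
EOF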
Upload the file to HDFS:
[root@leaf ~]# hadoop fs -put myword.txt /data/wordcount
Check the uploaded file and its contents in HDFS:
[root@leaf ~]# hadoop fs -ls /data/wordcount
-rw-r--r--   1 root supergroup         57 2017-09-01 20:40 /data/wordcount/myword.txt
[root@leaf ~]# hadoop fs -cat /data/wordcount/myword.txt
leaf yyh
yyh xpleaf
katy ling
yeyonghao leaf
xpleaf katy
(4) Run the wordcount program
Execute the following command:
[root@leaf ~]# hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /data/wordcount /output/wordcount
...
17/09/01 20:48:14 INFO mapreduce.Job: Job job_local1719603087_0001 completed successfully
17/09/01 20:48:14 INFO mapreduce.Job: Counters: 38
    File System Counters
        FILE: Number of bytes read=585940
        FILE: Number of bytes written=1099502
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=114
        HDFS: Number of bytes written=48
        HDFS: Number of read operations=15
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=4
    Map-Reduce Framework
        Map input records=5
        Map output records=10
        Map output bytes=97
        Map output materialized bytes=78
        Input split bytes=112
        Combine input records=10
        Combine output records=6
        Reduce input groups=6
        Reduce shuffle bytes=78
        Reduce input records=6
        Reduce output records=6
        Spilled Records=12
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=92
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=241049600
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=57
    File Output Format Counters
        Bytes Written=48
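The counters line up with the input nicely: Map input records=5 is the 5 lines of myword.txt, Map output records=10 is its 10 words, and Reduce output records=6 is the 6 distinct words. One caveat before re-running the job: MapReduce refuses to start if the output directory already exists, so a rerun needs the old output removed first, for example:

[root@leaf ~]# hadoop fs -rm -r /output/wordcount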
(5) View the results
The word counts land in a part file under the output directory:
[root@leaf ~]# hadoop fs -cat /output/wordcount/part-r-00000
katy	2
leaf	2
ling	1
xpleaf	2
yeyonghao	1
yyh	2
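To pull the result down to the local filesystem for further processing, -get works; the local file name here is just an example:

[root@leaf ~]# hadoop fs -get /output/wordcount/part-r-00000 ./wordcount_result.txt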
2. References
http://www.aboutyun.com/thread-7713-1-1.html