Installing Spark itself is skipped here; there are plenty of guides online.
Contents of the test-data.txt input file:

```
a b c
aaa
bbb
ccc
a b c
c
b
a
```
vi wordcount.py

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
from operator import add
from pyspark import SparkContext

logging.basicConfig(format='%(message)s', level=logging.INFO)

# input and output paths on the local filesystem
test_file_name = "file:///var/lib/hadoop-hdfs/spark_test/test-data.txt"
out_file_name = "file:///var/lib/hadoop-hdfs/spark_test/spark-out"

sc = SparkContext("local", "wordcount app")

# load the text file as an RDD of lines
text_file = sc.textFile(test_file_name)

# split lines into words, pair each word with 1, then sum the counts per word
counts = text_file.flatMap(lambda line: line.split(" ")) \
                  .map(lambda word: (word, 1)) \
                  .reduceByKey(add)

counts.saveAsTextFile(out_file_name)
sc.stop()
```
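To see what each transformation produces, here is a pure-Python sketch of the same flatMap/map/reduceByKey pipeline run on the test data (no Spark required; the variable names are purely illustrative):

```python
from operator import add
from functools import reduce
from itertools import groupby

lines = ["a b c", "aaa", "bbb", "ccc", "a b c", "c", "b", "a"]

# flatMap: one flat list of words across all lines
words = [w for line in lines for w in line.split(" ")]
# map: pair every word with the count 1
pairs = [(w, 1) for w in words]
# reduceByKey: sum the 1s per word (groupby needs sorted input)
counts = {k: reduce(add, (v for _, v in g))
          for k, g in groupby(sorted(pairs), key=lambda p: p[0])}
print(counts)  # {'a': 3, 'aaa': 1, 'b': 3, 'bbb': 1, 'c': 3, 'ccc': 1}
```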
By default, sc.textFile("path") reads from HDFS; prefixing the path with hdfs:// reads explicitly from the HDFS filesystem.
To read a local file instead, prefix the path with file:// so that sc.textFile("path") reads from the local filesystem, e.g. file:///home/user/Spark/README.md.
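A minimal sketch of the two URI schemes (the namenode host/port and both paths below are placeholders, not from the original setup):

```python
# explicit HDFS path (hypothetical namenode address)
rdd_hdfs = sc.textFile("hdfs://namenode:8020/user/hdfs/test-data.txt")
# explicit local-filesystem path
rdd_local = sc.textFile("file:///home/user/Spark/README.md")
```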
Run the program with spark-submit:

```bash
$ spark-submit /var/lib/hadoop-hdfs/spark_test/wordcount.py
```
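Note that because the script hard-codes "local" in SparkContext, a --master flag passed to spark-submit would be ignored (settings made in code take precedence over command-line flags). A common, more flexible pattern is to create the context without a master and supply it on the command line instead (the local[2] value below is just an example):

```python
# in wordcount.py: let spark-submit supply the master
sc = SparkContext(appName="wordcount app")
```

```bash
$ spark-submit --master local[2] /var/lib/hadoop-hdfs/spark_test/wordcount.py
```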
Check the results:

```bash
$ cat spark-out/part-00000
```

```
(u'a', 3)
(u'', 1)
(u'c', 3)
(u'b', 3)
(u'aaa', 1)
(u'bbb', 1)
(u'ccc', 1)
```

The (u'', 1) entry is an empty token produced by split(" "), most likely from a blank line or a trailing space in the input file.
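saveAsTextFile writes one part-XXXXX file per partition, so when the RDD has several partitions you can concatenate them all with a glob:

```bash
$ cat /var/lib/hadoop-hdfs/spark_test/spark-out/part-*
```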
Reference: http://blog.csdn.net/vs412237401/article/details/51823228