How to manage tweets saved in #Hadoop using #Apache #Spark

Apache Spark has just passed Hadoop in popularity on the web (according to Google Trends).

My first use of Apache Spark was extracting the text from tweets I had been collecting in Hadoop HDFS. My Python script was

import json

from pyspark import SparkContext

def valid(tweet):
  # Keep only records that actually carry a text field
  # (the streaming API also delivers e.g. deletion notices)
  return 'text' in tweet

def gettext(tweet):
  return tweet['text']

sc = SparkContext(appName="Tweets")
data = sc.textFile("hdfs://*/*/*.gz")

# Parse each JSON line once, then filter and extract the text
result = data.map(lambda line: json.loads(line))\
    .filter(valid)\
    .map(gettext)

output = result.collect()
for text in output:
    print text.encode('utf-8')
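The filter-and-extract logic can be checked without a cluster. Here is a minimal sketch of the same steps over plain Python lists (Python 3 here; the sample JSON lines are made up to mimic what the streaming API stores):

```python
import json

def valid(tweet):
    # Deletion notices and other control messages have no "text" field
    return 'text' in tweet

def gettext(tweet):
    return tweet['text']

# Hypothetical sample lines, one JSON document per line as stored in HDFS
lines = [
    '{"text": "hello #spark", "id": 1}',
    '{"delete": {"status": {"id": 2}}}',
]

parsed = [json.loads(line) for line in lines]
texts = [gettext(t) for t in parsed if valid(t)]
print(texts)  # only the record with a text field survives
```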

And launched it with

spark-1.1.0> bin/spark-submit --master local[4]

I used Apache Spark 1.1.0 and Apache Hadoop 2.5.2. When I compiled Spark with

mvn -Phadoop-2.5 -Dhadoop.version=2.5.2 -DskipTests -Pyarn -Phive package

I got an error related to the protocol buffers jar version when I tried to read files from Hadoop HDFS:

org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$CreateSnapshotRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet
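This error comes from mixing protobuf releases: classes generated with protobuf 2.5 cannot run against the 2.4.x runtime jar. One way to check which protobuf release the build actually resolved (run from the Spark source checkout; the Maven coordinates are the standard ones for protobuf-java):

```shell
# Print where protobuf-java appears in Spark's dependency tree
# and which version was resolved
mvn dependency:tree -Dincludes=com.google.protobuf:protobuf-java
```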

So I changed the pom.xml, bumping the protobuf version to the 2.5.0 release that ships with Hadoop 2.5:

<protobuf.version>2.5.0</protobuf.version>

And, after rebuilding Spark, it worked fine.