Tags: java, apache-spark, rdd

Sorting disordered after joining in Spark RDD


I am trying to get the most watched movies from the ratings dataset and map the corresponding movie names from the movies dataset using the common movie ID. When I join, my already-sorted list of the top 10 most watched movie IDs is no longer sorted in the end result. I also tried sortByKey(false), which does not work.

JavaRDD<String> movies = sc.textFile("in/ml-1m/ratings.dat");
// System.out.println(getmovieidanduserid());

// Extract the movie id (second field) from each rating line.
JavaRDD<String> movieid = movies.flatMap(line -> Arrays.asList(line.split("::")[1]).iterator());

// Count how many ratings each movie id received.
JavaPairRDD<String, Integer> moviemost = movieid.mapToPair(id -> new Tuple2<>(id, 1));
JavaPairRDD<String, Integer> moviemostlist = moviemost.reduceByKey((x, y) -> x + y);

// Swap to (count, movieId) so the pairs can be sorted by count.
JavaPairRDD<Integer, String> countToWordParis = moviemostlist
        .mapToPair(wordToCount -> new Tuple2<>(wordToCount._2(), wordToCount._1()));
JavaPairRDD<Integer, String> sortedCountToWordParis = countToWordParis.sortByKey(false);

// Swap back to (movieId, count), keeping the descending order.
JavaPairRDD<String, Integer> sortedWordToCountPairs = sortedCountToWordParis
        .mapToPair(countToWord -> new Tuple2<>(countToWord._2(), countToWord._1()));

// Keep only the top 10 as a new RDD.
JavaPairRDD<String, Integer> mostwatched = sc.parallelizePairs(sortedWordToCountPairs.take(10));

System.out.println(sortedWordToCountPairs.take(10));

for (Tuple2<String, Integer> mlost : mostwatched.collect()) {
    System.out.println(mlost._1() + " : " + mlost._2());
}

// Load the movies file and build (movieId, movieTitle) pairs.
JavaRDD<String> moviesname = sc.textFile("in/ml-1m/movies.dat");
JavaPairRDD<String, String> moviesiduser = moviesname.mapToPair(getPairFunction());

// Join the top-10 counts with the movie titles.
JavaPairRDD<String, Tuple2<Integer, String>> joindata = mostwatched.join(moviesiduser);
System.out.println("-------top movies----------");
System.out.println(joindata.take(10));

for (Tuple2<String, Tuple2<Integer, String>> wordToCount : joindata.collect()) {
    System.out.println(wordToCount._1() + " : " + wordToCount._2());
}
    }   


    private static PairFunction<String, String, String> getmovieidanduserid() {
        return (PairFunction<String, String, String>) line -> new Tuple2<>(line.split("::")[1],
                                                                           line.split("::")[0]);
    }

    private static PairFunction<String, String, String> getPairFunction() {
        return (PairFunction<String, String, String>) line -> new Tuple2<>(line.split("::")[0],
                                                                           line.split("::")[1]);
    }

Top 10 movie IDs and watch counts

2858 : 3428
260 : 2991
1196 : 2990
1210 : 2883
480 : 2672
2028 : 2653
589 : 2649
2571 : 2590
1270 : 2583
593 : 2578

After joining with the movie names

593 : (2578,Silence of the Lambs, The (1991))
589 : (2649,Terminator 2: Judgment Day (1991))
480 : (2672,Jurassic Park (1993))
2858 : (3428,American Beauty (1999))
260 : (2991,Star Wars: Episode IV - A New Hope (1977))
2571 : (2590,Matrix, The (1999))
2028 : (2653,Saving Private Ryan (1998))
1270 : (2583,Back to the Future (1985))
1210 : (2883,Star Wars: Episode VI - Return of the Jedi (1983))
1196 : (2990,Star Wars: Episode V - The Empire Strikes Back (1980))

American Beauty is the most watched movie, but it appears on the 4th line.


Solution

  • In Spark (and in fact in almost all SQL-like query engines), a join does not guarantee that ordering is preserved. You need to sort again after the join; see the sketch below.
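
For example, the joined pairs can be re-keyed by their watch count and sorted with sortByKey(false), mirroring the pattern already used before the join. This is a minimal sketch that assumes joindata is the JavaPairRDD<String, Tuple2<Integer, String>> produced in the question; the names byCount and sortedJoin are illustrative:

// `joindata` comes from the question: (movieId, (watchCount, movieTitle)).
// Re-key by watch count so the pairs can be sorted again after the join.
JavaPairRDD<Integer, Tuple2<String, String>> byCount = joindata
        .mapToPair(row -> new Tuple2<>(row._2()._1(),
                                       new Tuple2<>(row._1(), row._2()._2())));

// Sort descending by count, exactly like the pre-join sortByKey(false).
JavaPairRDD<Integer, Tuple2<String, String>> sortedJoin = byCount.sortByKey(false);

// Print in the expected order: movieId : (count,title)
for (Tuple2<Integer, Tuple2<String, String>> row : sortedJoin.take(10)) {
    System.out.println(row._2()._1() + " : (" + row._1() + "," + row._2()._2() + ")");
}

Alternatively, since only ten rows survive the take(10), the joined result could simply be collected and sorted on the driver.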