diff --git a/README_Hadoop_Streaming.md b/README_Hadoop_Streaming.md
index 7dab572..0fadafa 100644
--- a/README_Hadoop_Streaming.md
+++ b/README_Hadoop_Streaming.md
@@ -340,3 +340,37 @@ def reduce_one_group(key, group):
     for line in group:
         pass  # Do something
 ```
+
+## Custom partitioner
+If you need to control which key-value pairs are sent to which reducers, you can create a custom partitioner. Here's a sample that works with our word count example.
+```python
+#!/usr/bin/env -S python3 -u
+"""Word count partitioner."""
+import sys
+
+
+num_reducers = int(sys.argv[1])
+
+
+for line in sys.stdin:
+    key, value = line.split("\t")
+    if key[0] <= "G":
+        print(0 % num_reducers)
+    else:
+        print(1 % num_reducers)
+```
+
+Each line of output from the mappers is streamed to this partition executable, and the number of reducers is passed as the first command-line argument (`sys.argv[1]`). For each line, the partitioner checks whether the first letter of the key is less than or equal to "G". If it is, the line is sent to the first reducer; otherwise, it is sent to the second reducer.
+
+Use the `-partitioner` command-line argument to tell Madoop to use this partitioner.
+
+```console
+$ madoop \
+    -input example/input \
+    -output example/output \
+    -mapper example/map.py \
+    -reducer example/reduce.py \
+    -partitioner example/partition.py
+```
+
+This feature is similar to Hadoop's [`Partitioner` class](https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Partitioner.html), although it is not directly compatible with Hadoop. The main difference is that Hadoop requires the partitioner to be a Java class, while Madoop accepts any executable that reads from `stdin` and writes to `stdout`.
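
The partitioning rule in the sample above can be exercised without running Madoop. This is a minimal sketch (not part of Madoop) that mirrors the `key[0] <= "G"` logic in a helper function and shows which reducer a few sample word-count keys land on; the `partition` helper and the sample keys are illustrative, not from the original README. Note that the string comparison is case-sensitive, so lowercase keys (which compare greater than "G" in ASCII) all go to the second reducer.

```python
# Sketch of the partitioning rule from partition.py, assuming two reducers.
num_reducers = 2


def partition(key):
    """Return the reducer number for a key, mirroring partition.py's rule."""
    if key[0] <= "G":
        return 0 % num_reducers
    return 1 % num_reducers


# Show the reducer assignment for a few sample keys.
for key in ["Bye", "Goodbye", "Hadoop", "World", "bye"]:
    print(f"{key}\treducer {partition(key)}")
```

Because "G" <= "G" is true, "Goodbye" goes to reducer 0, while "Hadoop", "World", and the lowercase "bye" all go to reducer 1.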