Merge pull request #78 from eecs485staff/docs-custom-partitioner
Add custom partitioner to streaming tutorial
noah-weingarden authored Mar 31, 2024
2 parents 8e6c1ce + c5a7b7f commit dca408f
34 changes: 34 additions & 0 deletions README_Hadoop_Streaming.md
@@ -340,3 +340,37 @@ def reduce_one_group(key, group):
    for line in group:
        pass  # Do something
```

## Custom partitioner
If you need to control which key-value pairs are sent to which reducers, you can create a custom partitioner. Here's a sample that works with our word count example.
```python
#!/usr/bin/env -S python3 -u
"""Word count partitioner."""
import sys


# Madoop passes the number of reducers as the first command-line argument.
num_reducers = int(sys.argv[1])


for line in sys.stdin:
    key, value = line.split("\t")
    # Print the partition number for this key-value pair.  Keys whose first
    # character sorts at or before "G" go to the first reducer; all others
    # go to the second.
    if key[0] <= "G":
        print(0 % num_reducers)
    else:
        print(1 % num_reducers)
```

Each line of output from the mappers is streamed to this partitioner, and the number of reducers is passed to it as its first command-line argument (`sys.argv[1]`). For each line, the partitioner prints the number of the partition (reducer) that should receive it: if the first letter of the key sorts at or before "G", the line goes to the first reducer; otherwise, it goes to the second reducer.
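
For example, assuming the script above is saved as `example/partition.py` and marked executable, you can sanity-check its decisions locally by piping a few tab-separated key-value pairs into it, with the reducer count as the argument:

```console
$ printf 'Goodbye\t1\nHadoop\t1\n' | ./example/partition.py 2
0
1
```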

Use the `-partitioner` command-line argument to tell Madoop to use this partitioner.

```console
$ madoop \
-input example/input \
-output example/output \
-mapper example/map.py \
-reducer example/reduce.py \
-partitioner example/partition.py
```

This feature is similar to Hadoop's [`Partitioner` class](https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Partitioner.html), although it is not directly compatible with Hadoop. The main difference is that Hadoop only allows partitioners to be a Java class, while Madoop allows any executable that reads from `stdin` and writes to `stdout`.
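
For instance, a partitioner that spreads keys evenly across reducers, similar in spirit to Hadoop's default `HashPartitioner`, is just another small executable. The script below is a minimal sketch, not part of the word count example; it assumes the same `stdin`/`sys.argv[1]` contract as the partitioner above and uses `zlib.crc32` so the hash is deterministic across processes.

```python
#!/usr/bin/env -S python3 -u
"""Hash-based partitioner sketch."""
import sys
import zlib

num_reducers = int(sys.argv[1])

for line in sys.stdin:
    key, value = line.split("\t")
    # Deterministic hash of the key, mapped onto the available reducers.
    print(zlib.crc32(key.encode("utf-8")) % num_reducers)
```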
