Tweaking the section names in the how-it-works section
Also added a bit of a segue to combining the error-evaluator/
error-locator derivative sections, though I'm worried this is getting
too wordy...
geky committed Oct 25, 2024
1 parent 00da792 commit d25b020
Showing 1 changed file with 13 additions and 4 deletions.
17 changes: 13 additions & 4 deletions README.md
@@ -236,7 +236,7 @@ have enough information to reconstruct our original codeword:
>
</p>
### Locating the errors
### Finding the error locations

Ok, let's say we received a codeword $C'(x)$ with $e$ errors. Evaluating
at our fixed points $g^i$, where $i < n$ and $n \ge 2e$, gives us our
@@ -581,7 +581,7 @@ The actual algorithm itself is relatively simple:

This is all implemented in [ramrsbd_find_l][ramrsbd_find_l].
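
As a rough sketch of the idea (a stand-in, not the actual ramrsbd_find_l),
Berlekamp-Massey over GF(256) might look something like this, with
`gf256_mul`/`gf256_div` as hypothetical helpers built on a slow carryless
multiply (assuming the polynomial 0x11d common in Reed-Solomon) rather than
the log tables mentioned later:

```c
#include <stdint.h>
#include <string.h>

// stand-in GF(256) multiply: carryless multiply reduced by the polynomial
// x^8+x^4+x^3+x^2+1 (0x11d); a real implementation would probably use
// log/antilog tables instead
static uint8_t gf256_mul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    while (b) {
        if (b & 1) {
            p ^= a;
        }
        b >>= 1;
        a = (a << 1) ^ ((a & 0x80) ? 0x1d : 0);
    }
    return p;
}

// stand-in GF(256) divide: a/b = a*b^254, since b^255 = 1 for b != 0
static uint8_t gf256_div(uint8_t a, uint8_t b) {
    uint8_t b_inv = 1;
    for (int i = 0; i < 254; i++) {
        b_inv = gf256_mul(b_inv, b);
    }
    return gf256_mul(a, b_inv);
}

// Berlekamp-Massey: find the smallest LFSR, aka error-locator polynomial
// Lambda(x), that generates the n syndromes s[0..n-1]; lambda needs room
// for n+1 coefficients and comes back with lambda[0] = 1; returns the
// LFSR length, aka the number of errors e
static int find_lambda(const uint8_t *s, int n, uint8_t *lambda) {
    uint8_t c[n+1], b[n+1], t[n+1];
    memset(c, 0, n+1); c[0] = 1;    // current best LFSR
    memset(b, 0, n+1); b[0] = 1;    // LFSR before the last size change
    int l = 0;                      // current LFSR length
    int m = 1;                      // shifts since the last size change
    uint8_t db = 1;                 // discrepancy at the last size change

    for (int i = 0; i < n; i++) {
        // how far off is the LFSR's prediction of s[i]?
        uint8_t d = s[i];
        for (int j = 1; j <= l; j++) {
            d ^= gf256_mul(c[j], s[i-j]);
        }

        if (d == 0) {
            // prediction already correct
            m++;
        } else {
            // cancel the discrepancy: c(x) -= (d/db) x^m b(x)
            if (2*l <= i) {
                memcpy(t, c, n+1);  // LFSR needs to grow, save the old c
            }
            uint8_t coef = gf256_div(d, db);
            for (int j = 0; j+m <= n; j++) {
                c[j+m] ^= gf256_mul(coef, b[j]);
            }
            if (2*l <= i) {
                l = i+1 - l;
                memcpy(b, t, n+1);
                db = d;
                m = 1;
            } else {
                m++;
            }
        }
    }

    memcpy(lambda, c, n+1);
    return l;
}
```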

#### Solving binary LFSRs for fun and profit
#### Solving binary LFSRs for fun

Taking a step away from GF(256) for a moment, let's look at a simpler
LFSR in GF(2), aka binary.
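
As a quick toy example, a 4-bit binary LFSR where each new bit is just the
XOR, aka GF(2) sum, of a couple of tap bits:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // a 4-bit Fibonacci LFSR with taps 4 and 3 (x^4 + x^3 + 1), each new
    // bit is the XOR, aka GF(2) sum, of the tapped bits
    uint8_t lfsr = 0x1;  // any non-zero seed works
    for (int i = 0; i < 15; i++) {
        printf("%d", lfsr & 1);
        uint8_t bit = ((lfsr >> 0) ^ (lfsr >> 1)) & 1;
        lfsr = (lfsr >> 1) | (bit << 3);
    }
    // x^4 + x^3 + 1 is primitive, so this cycles through all 15 non-zero
    // states before repeating
    printf("\n");
    return 0;
}
```
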
@@ -800,7 +800,7 @@ L8 = '-> | a3 | 78 | 8e | 00 |-> Output: 30 80 86 cb a3 78 8e 00

Is this a good compression algorithm? Probably not.

#### Finding the error locations
#### Locating the errors

Coming back to Reed-Solomon. Thanks to Berlekamp-Massey, we can solve the
following recurrence for the terms $\Lambda_k$ given at least $n \ge 2e$
@@ -842,7 +842,7 @@ I've seen some other optimizations applied here, mainly
really useful in hardware and doesn't actually improve our runtime when
using Horner's method and GF(256) log tables.
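
For what it's worth, a sketch of what that Horner's-method search might look
like, testing $\Lambda(X_j^{-1}) = 0$ at every candidate location
$X_j = g^i$, reusing the stand-in `gf256_mul`/`gf256_div` from earlier and
assuming $g = \mathtt{0x02}$ (not necessarily how ramrsbd structures it):

```c
// reusing the stand-in helpers sketched earlier, not the library's own
uint8_t gf256_mul(uint8_t a, uint8_t b);
uint8_t gf256_div(uint8_t a, uint8_t b);

// find error locations by testing Lambda(Xj^-1) == 0 for every candidate
// location Xj = g^i, evaluating Lambda with Horner's method; assumes the
// generator g = 0x02; returns the number of locations found
static int find_error_locations(
        const uint8_t *lambda, int e,   // error-locator, degree e
        int n,                          // codeword length
        uint8_t *xs, int xs_size) {     // found error locations Xj
    int count = 0;
    uint8_t gi = 1;                     // g^i, starting at g^0
    for (int i = 0; i < n; i++) {
        // Horner's method:
        // Lambda(y) = (...(lambda_e y + lambda_e-1) y + ...) y + lambda_0
        uint8_t y = gf256_div(1, gi);   // y = Xj^-1 = g^-i
        uint8_t sum = lambda[e];
        for (int j = e-1; j >= 0; j--) {
            sum = gf256_mul(sum, y) ^ lambda[j];
        }

        if (sum == 0 && count < xs_size) {
            xs[count++] = gi;           // Lambda(g^-i) == 0, error at g^i
        }

        gi = gf256_mul(gi, 0x02);       // next fixed point g^(i+1)
    }
    return count;
}
```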

### Evaluating the errors
### Finding the error magnitudes

Once we've found the error-locations, $X_j$, the next step is to find the
error-magnitudes, $Y_j$.
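
Skipping ahead a bit, here's a sketch of what this step might end up looking
like once we have the error-evaluator polynomial $\Omega(x)$, assuming the
relation $Y_j = X_j \Omega(X_j^{-1}) / \Lambda'(X_j^{-1})$ that the following
sections work towards, and again the stand-in `gf256` helpers (the library's
exact conventions may differ):

```c
// reusing the stand-in helpers sketched earlier, not the library's own
uint8_t gf256_mul(uint8_t a, uint8_t b);
uint8_t gf256_div(uint8_t a, uint8_t b);

// evaluate a polynomial p of degree d at y with Horner's method
static uint8_t gf256_eval(const uint8_t *p, int d, uint8_t y) {
    uint8_t sum = p[d];
    for (int i = d-1; i >= 0; i--) {
        sum = gf256_mul(sum, y) ^ p[i];
    }
    return sum;
}

// find the error-magnitude Yj for a given error-location Xj, using the
// error-evaluator omega (degree omega_d) and error-locator lambda (degree e)
static uint8_t find_error_magnitude(
        const uint8_t *omega, int omega_d,
        const uint8_t *lambda, int e,
        uint8_t xj) {
    uint8_t xj_inv = gf256_div(1, xj);

    // formal derivative of lambda: in GF(256) the even terms vanish since
    // the characteristic is 2, lambda'(x) = lambda_1 + lambda_3 x^2 + ...
    uint8_t dlambda = 0;
    uint8_t xpow = 1;
    for (int i = 1; i <= e; i += 2) {
        dlambda ^= gf256_mul(lambda[i], xpow);
        xpow = gf256_mul(xpow, gf256_mul(xj_inv, xj_inv));
    }

    // Yj = Xj * omega(Xj^-1) / lambda'(Xj^-1)
    return gf256_div(
            gf256_mul(xj, gf256_eval(omega, omega_d, xj_inv)),
            dlambda);
}
```
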
@@ -1147,6 +1147,15 @@ gobbledygook!
>
</p>
#### Evaluating the errors

So for a given error-location, $X_j$, the error-evaluator polynomial,
$\Omega(x)$, gives us the error-magnitude times some gobbledygook,
$Y_j \prod_{l \ne j} \left(1 - X_l X_j^{-1}\right)$, and the formal
derivative of the error-locator polynomial, $\Lambda'(x)$, gives us the
error-location times some gobbledygook,
$X_j \prod_{l \ne j} \left(1 - X_l X_j^{-1}\right)$.

If we divide $\Omega(X_j^{-1})$ by $\Lambda'(X_j^{-1})$, all that
gobbledygook cancels out, leaving us with a simple equation of only
$Y_j$ and $X_j$:
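
Spelling that out, the cancellation should go roughly like so:

```math
\frac{\Omega(X_j^{-1})}{\Lambda'(X_j^{-1})}
    = \frac{Y_j \prod_{l \ne j} \left(1 - X_l X_j^{-1}\right)}
           {X_j \prod_{l \ne j} \left(1 - X_l X_j^{-1}\right)}
    = Y_j X_j^{-1}
```

Which rearranges to $Y_j = X_j \frac{\Omega(X_j^{-1})}{\Lambda'(X_j^{-1})}$.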
