This polynomial has some rather useful properties:

$`
\begin{aligned}
1 - X_j x &= 1 - X_j X_j^{-1} \\
&= 1 - 1 \\
&= 0
\end{aligned}
`$

Coming back to Reed-Solomon. Thanks to Berlekamp-Massey, we can solve the
following recurrence for the terms $\Lambda_k$ given at least $n \ge 2e$
syndromes $s_i$:

``` math
\Lambda(i) = s_i = \sum_{k=1}^e \Lambda_k s_{i-k}
```

These terms define our error-locator polynomial, which we can use to
find the locations of errors:

``` math
\Lambda(x) = 1 + \sum_{k=1}^e \Lambda_k x^k
```

All we have left to do is figure out where $\Lambda(X_j^{-1})=0$, since
these will be the locations of our errors.
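
Since there are only as many candidate error-locations as there are
symbols in our codeword, one way to do this is to just try them all.
Here's a rough Python sketch of that brute-force search — just a sketch,
not the actual implementation here: the GF(256) helpers, the field
polynomial `0x11d`, the generator $g=2$, and the convention $X_j = g^j$
are assumptions for illustration:

``` python
# A rough sketch, not the actual implementation: find error locations by
# testing Lambda(X_j^-1) == 0 at every codeword position j. Assumes GF(256)
# with the polynomial 0x11d, generator g = 2, and X_j = g^j.

def gf_mul(a, b):
    # multiply two GF(256) elements, reducing by the polynomial 0x11d
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def gf_pow(a, e):
    # exponentiate by repeated multiplication
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_eval(p, x):
    # evaluate a polynomial, p[i] being the coefficient of x^i (Horner's method)
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x) ^ c
    return r

def find_error_locations(lambda_, codeword_size):
    # lambda_ = [1, Lambda_1, ..., Lambda_e]
    locations = []
    for j in range(codeword_size):
        x_j_inv = gf_pow(gf_pow(2, j), 254)  # X_j^-1, since a^254 = a^-1 in GF(256)
        if gf_eval(lambda_, x_j_inv) == 0:
            locations.append(j)
    return locations
```
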
Once we've found the error-locations $X_j$, solving for the error
magnitudes $Y_j$ is relatively straightforward. Kind of.

Recall the definition of our syndromes $S_i$:

``` math
S_i = \sum_{j \in e} Y_j X_j^i
```

With $e$ syndromes, this can be rewritten as a system of equations with
$e$ equations and $e$ unknowns, our error magnitudes $Y_j$, which we can
solve for:

``` math
\begin{bmatrix}
S_0 \\
S_1 \\
\vdots \\
S_{e-1}
\end{bmatrix} =
\begin{bmatrix}
1 & 1 & \dots & 1 \\
X_{j_0} & X_{j_1} & \dots & X_{j_{e-1}} \\
\vdots & \vdots & \ddots & \vdots \\
X_{j_0}^{e-1} & X_{j_1}^{e-1} & \dots & X_{j_{e-1}}^{e-1}
\end{bmatrix}
\begin{bmatrix}
Y_{j_0} \\
Y_{j_1} \\
\vdots \\
Y_{j_{e-1}}
\end{bmatrix}
```
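
To make this concrete, consider the simplest interesting case, $e=2$
errors (this worked example is just an illustration, not part of the
original derivation). We get two equations and two unknowns:

``` math
\begin{aligned}
S_0 &= Y_{j_0} + Y_{j_1} \\
S_1 &= Y_{j_0} X_{j_0} + Y_{j_1} X_{j_1}
\end{aligned}
```

Which we can solve with a bit of substitution:

``` math
Y_{j_0} = \frac{S_1 - S_0 X_{j_1}}{X_{j_0} - X_{j_1}},
\quad
Y_{j_1} = \frac{S_0 X_{j_0} - S_1}{X_{j_0} - X_{j_1}}
```

This works, but solving the general system means inverting an
$e \times e$ matrix, which gets expensive as $e$ grows.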

#### Forney's algorithm

Thankfully, there's a faster way to solve
for $Y_j$ directly, called [Forney's algorithm][forneys-algorithm].
Assuming we know an error-locator $X_j$, plug it into the following
formula to find an error-magnitude $Y_j$:

``` math
Y_j = \frac{X_j \Omega(X_j^{-1})}{\Lambda'(X_j^{-1})}
```

Where $\Omega(x)$, called the error-evaluator polynomial, is defined like
so:

``` math
\Omega(x) = S(x) \Lambda(x) \bmod x^n
```

And $\Lambda'(x)$, the [formal derivative][formal-derivative] of the
error-locator, can be calculated term by term like so:

``` math
\Lambda'(x) = \sum_{i=1}^e i \cdot \Lambda_i x^{i-1}
```

Though note $i$ is not a field element, so multiplication by $i$
represents normal repeated addition. And since addition is xor in our
field, multiplying by an even $i$ cancels the term to zero, while
multiplying by an odd $i$ leaves it unchanged.
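
Putting these pieces together, here's a rough Python sketch of Forney's
algorithm. Again, this is just a sketch, not the actual implementation:
the GF(256) helpers, the field polynomial `0x11d`, the generator $g=2$,
and the convention $X_j = g^j$ are the same assumptions as in the
earlier sketch:

``` python
# A rough sketch, not the actual implementation: Forney's algorithm for one
# error-magnitude Y_j. Same assumed GF(256) helpers as the earlier sketch
# (field polynomial 0x11d, generator g = 2, X_j = g^j).

def gf_mul(a, b):
    # multiply two GF(256) elements, reducing by the polynomial 0x11d
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def gf_pow(a, e):
    # exponentiate by repeated multiplication
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_div(a, b):
    # a/b = a*b^-1, and b^254 = b^-1 in GF(256)
    return gf_mul(a, gf_pow(b, 254))

def gf_eval(p, x):
    # evaluate a polynomial, p[i] being the coefficient of x^i (Horner's method)
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x) ^ c
    return r

def poly_mul_mod(a, b, n):
    # multiply two polynomials, keeping only the first n coefficients (mod x^n)
    r = [0]*n
    for i, a_i in enumerate(a):
        for j, b_j in enumerate(b):
            if i + j < n:
                r[i+j] ^= gf_mul(a_i, b_j)
    return r

def formal_derivative(p):
    # i*p_i means adding p_i to itself i times, and addition is xor, so
    # even i cancel to zero and odd i leave p_i unchanged
    return [p[i] if i % 2 == 1 else 0 for i in range(1, len(p))]

def forney(syndromes, lambda_, j):
    # Y_j = X_j*omega(X_j^-1) / lambda'(X_j^-1)
    n = len(syndromes)
    omega = poly_mul_mod(syndromes, lambda_, n)
    d_lambda = formal_derivative(lambda_)
    x_j = gf_pow(2, j)
    x_j_inv = gf_pow(x_j, 254)
    return gf_div(
        gf_mul(x_j, gf_eval(omega, x_j_inv)),
        gf_eval(d_lambda, x_j_inv))
```
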
Haha, I know right? Where did this equation come from? How does it work?
How did Forney even come up with this?

To be honest I don't know the answer to most of these questions; there's
very little documentation online about where this formula comes from.

But at the very least we can prove that it works.

#### The error-evaluator polynomial

Let us start with the syndrome polynomial $S(x)$:

``` math
S(x) = \sum_{i=0}^n S_i x^i
```

Substituting the definition of $S_i$:

``` math
\begin{aligned}
S(x) &= \sum_{i=0}^n \sum_{j \in e} Y_j X_j^i x^i \\
&= \sum_{j \in e} \left(Y_j \sum_{i=0}^n X_j^i x^i\right)
\end{aligned}
```

The sum on the right side turns out to be a [geometric series][geometric-series]:

``` math
S(x) = \sum_{j \in e} Y_j \frac{1 - X_j^n x^n}{1 - X_j x}
```
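
In case this step isn't obvious, it's the finite
[geometric series][geometric-series] identity: with the sum running over
the $n$ syndrome terms $i = 0, \dots, n-1$, multiplying by $1 - X_j x$
telescopes:

``` math
\left(1 - X_j x\right) \sum_{i=0}^{n-1} X_j^i x^i
    = \sum_{i=0}^{n-1} X_j^i x^i - \sum_{i=1}^{n} X_j^i x^i
    = 1 - X_j^n x^n
```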

If we then multiply with our error-locator polynomial $\Lambda(x)$:

``` math
\begin{aligned}
S(x)\Lambda(x) &= \sum_{j \in e} \left(Y_j \frac{1 - X_j^n x^n}{1 - X_j x}\right) \cdot \prod_{k=0}^e \left(1 - X_k x\right) \\
&= \sum_{j \in e} \left(Y_j \left(1 - X_j^n x^n\right) \prod_{k \ne j} \left(1 - X_k x\right)\right)
\end{aligned}
```

We see exactly one term in each summand cancel out: the $1 - X_j x$ in
the denominator is one of the factors of the error-locator polynomial
$\Lambda(x)$, so multiplying by $\Lambda(x)$ cancels it, leaving only the
$k \ne j$ product.

But if we expand the multiplication, something interesting happens:

``` math
S(x)\Lambda(x) = \sum_{j \in e} \left(Y_j \prod_{k \ne j} \left(1 - X_k x\right)\right) - \sum_{j \in e} \left(Y_j X_j^n x^n \prod_{k \ne j} \left(1 - X_k x\right)\right)
```

On the left side of the subtraction, all terms are at _most_ degree
$x^{e-1}$. On the right side of the subtraction, all terms are at _least_
degree $x^n$.

Imagine how these contribute to the expanded form of the equation:

``` math
S(x)\Lambda(x) = \overbrace{\Omega_0 + \dots + \Omega_{e-1} x^{e-1}}^{\sum_{j \in e} \left(Y_j \prod_{k \ne j} \left(1 - X_k x\right)\right)} + \overbrace{\Omega_n x^n + \dots + \Omega_{n+e-1} x^{n+e-1}}^{\sum_{j \in e} \left(Y_j X_j^n x^n \prod_{k \ne j} \left(1 - X_k x\right)\right) }
```

If we truncate this polynomial, $\bmod x^n$ in math land, we can
effectively delete part of the equation:

``` math
S(x)\Lambda(x) \bmod x^n = \overbrace{\Omega_0 + \dots + \Omega_{e-1} x^{e-1}}^{\sum_{j \in e} \left(Y_j \prod_{k \ne j} \left(1 - X_k x\right)\right)}
```

Giving us the equation for the error-evaluator polynomial $\Omega(x)$:

``` math
\Omega(x) = S(x)\Lambda(x) \bmod x^n = \sum_{j \in e} \left(Y_j \prod_{k \ne j} \left(1 - X_k x\right)\right)
```

What's really neat about the error-evaluator polynomial $\Omega(x)$ is
that $k \ne j$ condition.

The error-evaluator polynomial $\Omega(x)$ still contains a big chunk of
the error-locator polynomial $\Lambda(x)$. If we plug in an
error-location, $X_{j'}^{-1}$, _most_ of the terms evaluate to zero,
except the one where $j' = j$!

``` math
\begin{aligned}
\Omega(X_{j'}^{-1}) &= \sum_{j \in e} \left(Y_j \prod_{k \ne j} \left(1 - X_k X_{j'}^{-1}\right)\right) \\
&= Y_{j'} \prod_{k \ne j'} \left(1 - X_k X_{j'}^{-1}\right)
\end{aligned}
```

And right there is our error-magnitude, $Y_{j'}$! Sure it's multiplied
with a bunch of gobbledygook, but it is there.