diff --git a/README.md b/README.md
index 3d22881..802c48c 100644
--- a/README.md
+++ b/README.md
@@ -831,8 +831,10 @@ using Horner's method and GF(256) log tables.
 
 #### Evaluating the errors
 
-Once we've found our error locations $X_j$, solving for the error
-magnitudes $Y_j$ is relatively straightforward. Kind of.
+Once we've found the error-locations, $X_j$, the next step is to find the
+error-magnitudes, $Y_j$.
+
+This step is relatively straightforward... kind of...
 
 Recall the definition of our syndromes $S_i$:
 
@@ -843,9 +845,8 @@ Recall the definition of our syndromes $S_i$:
 
 > $S_i = \sum_{j \in E} Y_j X_j^i$
 
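As a quick illustration of this definition, here's a sketch in Python that computes the syndromes two ways: the way a decoder actually computes them, by evaluating the received message at powers of a generator, and directly from the (normally unknown) error locations and magnitudes. This assumes GF(256) with the common reducing polynomial 0x11d and generator 2; the helper names and the specific errors are made up for illustration:

```python
def gf_mul(a, b):
    # GF(256) multiply: carryless multiply, reduced by the
    # polynomial 0x11d commonly used by Reed-Solomon codes
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def gf_pow(a, e):
    # naive exponentiation by repeated multiplication
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

# pretend the all-zero codeword was sent and two errors occurred:
# magnitudes Y_j at locations X_j = 2^j
errs = {3: 0x0b, 5: 0x2f}
recv = [errs.get(j, 0) for j in range(10)]

# syndromes as the decoder computes them: S_i = recv(2^i)
S = []
for i in range(4):
    s = 0
    for j, c in enumerate(recv):
        s ^= gf_mul(c, gf_pow(gf_pow(2, i), j))
    S.append(s)

# syndromes from the definition: S_i = sum_j Y_j X_j^i
S_def = []
for i in range(4):
    s = 0
    for j, Yj in errs.items():
        s ^= gf_mul(Yj, gf_pow(gf_pow(2, j), i))
    S_def.append(s)

assert S == S_def and any(S)
```

Since the valid codeword contributes nothing to the syndromes, both computations agree, and a nonzero syndrome reveals that errors are present.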
-With $e$ syndromes, this can be rewritten as a system of equations with
-$e$ equations and $e$ unknowns, our error magnitudes $Y_j$, which we can
-solve for:
+With $e$ syndromes, this can be rewritten as a system with $e$ equations
+and $e$ unknowns, which we can, in theory, solve for:
 
 > $\begin{aligned} S_0 &= \sum_{j \in E} Y_j X_j^0 \\ S_1 &= \sum_{j \in E} Y_j X_j^1 \\ &\;\;\vdots \\ S_{e-1} &= \sum_{j \in E} Y_j X_j^{e-1} \end{aligned}$
 
 Rather than solving this system directly, Forney's algorithm gives us an
 explicit formula for the error-magnitudes:
 
 > $Y_j = X_j \frac{\Omega(X_j^{-1})}{\Lambda'(X_j^{-1})}$
 
-Where $\Omega(x)$, called the error-evaluator polynomial, is defined like
-so:
+Where $\Omega(x)$, called the "error-evaluator polynomial", is defined
+like so:
 
 > $\Omega(x) = S(x) \Lambda(x) \bmod x^n$
 
+$S(x)$, called the "syndrome polynomial", is defined like so (we just
+pretend our syndromes are a polynomial now):
+
+> $S(x) = \sum_i S_i x^i$
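In code, the "$\bmod x^n$" in $\Omega(x)$'s definition just means dropping any product terms of degree $n$ or higher. A sketch of the truncated polynomial multiply (assuming polynomials are lists of GF(256) coefficients, lowest degree first, and a gf_mul helper for field multiplication; both names are invented here):

```python
def gf_mul(a, b):
    # GF(256) multiply, reduced by the common 0x11d polynomial
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def poly_mul_mod(a, b, n):
    # multiply two polynomials over GF(256), keeping only
    # terms with degree < n, i.e. a(x)*b(x) mod x^n
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                out[i+j] ^= gf_mul(ai, bj)
    return out
```

So given coefficient lists for the syndromes and the error-locator, the error-evaluator would just be `poly_mul_mod(S, L, n)`.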
 And $\Lambda'(x)$, the [formal derivative][formal-derivative] of the
-error-locator, can be calculated by terms like so:
+error-locator, can be calculated like so:
 
 > $\Lambda'(x) = \sum_{k=1}^e k \Lambda_k x^{k-1}$
 
-Though note $i$ is not a field element, so multiplication by $i$
+Though note $k$ is not a field element, so multiplication by $k$
 represents normal repeated addition. And since addition is xor in our
 field, this just cancels out every other term.
 
@@ -902,60 +913,62 @@ The end result is a simple formula for our error-magnitudes $Y_j$.
 
 Haha, I know right? Where did this equation come from? How does it
 work? How did Forney even come up with this?
 
-To be honest I don't know the answer to most of these questions, there's
-very little documentation online about where this formula comes from.
+I don't know the answer to most of these questions; there's very little
+documentation online about where this formula comes from or how it works.
 
-But at the very least we can prove that it works.
+But at the very least we can prove that it does work!
 
 #### The error-evaluator polynomial
 
-Let us start with the syndrome polynomial $S(x)$:
+Let's start with the syndrome polynomial $S(x)$:
 
 > $S(x) = \sum_i S_i x^i$
 
-Substituting the definition of $S_i$:
+Substituting in the definition of our syndromes,
+$S_i = \sum_{j \in E} Y_j X_j^i$:
 
 > $S(x) = \sum_{i=0}^{n-1} \sum_{j \in E} Y_j X_j^i x^i = \sum_{j \in E} Y_j \sum_{i=0}^{n-1} (X_j x)^i$
 
-The sum on the right side turns out to be a [geometric series][geometric-series]:
+The sum on the right turns out to be a [geometric series][geometric-series]:
 
 > $S(x) = \sum_{j \in E} Y_j \sum_{i=0}^{n-1} (X_j x)^i = \sum_{j \in E} Y_j \frac{1 - (X_j x)^n}{1 - X_j x}$
 
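Finite-field geometric series can feel suspicious, so here's a quick numeric check that the telescoping identity $\left(\sum_{i=0}^{n-1} (X x)^i\right)(1 - X x) = 1 - (X x)^n$ really holds over GF(256), working directly on coefficient lists (a sketch; gf_mul/gf_pow and the sample values are invented for illustration):

```python
def gf_mul(a, b):
    # GF(256) multiply, reduced by the common 0x11d polynomial
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

X, n = 8, 4  # a sample error-location and syndrome count
# coefficients of sum_{i=0}^{n-1} (X x)^i, lowest degree first
geom = [gf_pow(X, i) for i in range(n)]
# coefficients of 1 - X x (subtraction is xor, so the sign vanishes)
factor = [1, X]
# polynomial product over GF(256)
prod = [0] * (n + 1)
for i, gi in enumerate(geom):
    for j, fj in enumerate(factor):
        prod[i + j] ^= gf_mul(gi, fj)
# everything in the middle telescopes away, leaving 1 - (X x)^n
assert prod == [1] + [0] * (n - 1) + [gf_pow(X, n)]
```

Each middle coefficient is $X^k \oplus X \cdot X^{k-1} = 0$, which is exactly the telescoping the closed form relies on.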
-If we then multiply with our error-locator polynomial $\Lambda(x)$:
+If we then multiply with our error-locator polynomial, $\Lambda(x)$:
 
 > $S(x)\Lambda(x) = \sum_{j \in E} Y_j \frac{1 - (X_j x)^n}{1 - X_j x} \prod_{k \in E} \left(1 - X_k x\right) = \sum_{j \in E} Y_j \left(1 - (X_j x)^n\right) \prod_{k \in E,\, k \neq j} \left(1 - X_k x\right)$
 
-We see exactly one term in each summand (TODO summand??) cancel out.
+We see exactly one term in each summand cancel out.
 
-At this point, if we plug in $X_j^{-1}$, this still evaluates to zero
-thanks to the error-locator polynomial $\Lambda(x)$.
+At this point, if we plug in $X_j^{-1}$, $S(X_j^{-1})\Lambda(X_j^{-1})$
+still evaluates to zero thanks to the error-locator polynomial
+$\Lambda(x)$.
 
 But if we expand the multiplication, something interesting happens:
 
 > $S(x)\Lambda(x) = \sum_{j \in E} Y_j \prod_{k \neq j} \left(1 - X_k x\right) - \sum_{j \in E} Y_j (X_j x)^n \prod_{k \neq j} \left(1 - X_k x\right)$
 
@@ -967,18 +980,18 @@ Imagine how these contribute to the expanded form of the equation:
 
 > $S(x)\Lambda(x) = \underbrace{\sum_{j \in E} Y_j \prod_{k \neq j} \left(1 - X_k x\right)}_{\text{degree} \;\le\; e-1} - \underbrace{\sum_{j \in E} Y_j (X_j x)^n \prod_{k \neq j} \left(1 - X_k x\right)}_{\text{degree} \;\ge\; n}$
 
 If we truncate this polynomial, $\bmod x^n$ in math land, we can
-effectively delete part of the equation:
+effectively delete part of this equation:
 
 > $S(x)\Lambda(x) \bmod x^n = \sum_{j \in E} Y_j \prod_{k \neq j} \left(1 - X_k x\right) = \Omega(x)$
 
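Stepping back from the proof for a moment, the formula itself can be checked numerically end to end: build syndromes from two known errors, construct $\Lambda$, $\Omega$, and $\Lambda'$, and confirm that $Y_j = X_j\,\Omega(X_j^{-1})/\Lambda'(X_j^{-1})$ recovers the original magnitudes. This is a sketch, not ramrsbd's implementation; it assumes GF(256) with the common 0x11d polynomial and generator 2, and every helper name is invented:

```python
def gf_mul(a, b):
    # GF(256) multiply, reduced by the common 0x11d polynomial
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    # the multiplicative group has order 255, so a^254 = a^-1
    return gf_pow(a, 254)

def poly_eval(p, x):
    # Horner's method; p[k] is the coefficient of x^k
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x) ^ c
    return r

# two known errors: magnitudes Y_j at locations X_j = 2^j
Y = [0x0b, 0x2f]
X = [gf_pow(2, 3), gf_pow(2, 5)]
n = 4  # number of syndromes

# syndromes S_i = sum_j Y_j X_j^i
S = [0] * n
for i in range(n):
    for Yj, Xj in zip(Y, X):
        S[i] ^= gf_mul(Yj, gf_pow(Xj, i))

# error-locator Lambda(x) = prod_j (1 - X_j x), built one root at a time
L = [1]
for Xj in X:
    L = [a ^ gf_mul(Xj, b) for a, b in zip(L + [0], [0] + L)]

# error-evaluator Omega(x) = S(x)Lambda(x) mod x^n
O = [0] * n
for i, si in enumerate(S):
    for k, lk in enumerate(L):
        if i + k < n:
            O[i + k] ^= gf_mul(si, lk)

# formal derivative Lambda'(x): even-power terms cancel, since
# multiplying by an even integer xors an even number of copies
dL = [c if k % 2 else 0 for k, c in enumerate(L)][1:]

# Forney's formula recovers each magnitude
for Yj, Xj in zip(Y, X):
    xi = gf_inv(Xj)
    assert Yj == gf_mul(Xj,
        gf_mul(poly_eval(O, xi), gf_inv(poly_eval(dL, xi))))
```

Note how the high coefficients of $\Omega$ come out zero, exactly the part of the product that the $\bmod x^n$ truncation is allowed to delete.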