From ebafec440726781627123d22f0b4e034325837f0 Mon Sep 17 00:00:00 2001
From: Christopher Haster
Date: Thu, 31 Oct 2024 14:11:49 -0500
Subject: [PATCH] README.md - Filling out links
---
README.md | 133 ++++++++++++++++++++++++++++++++++++++----------------
1 file changed, 93 insertions(+), 40 deletions(-)
diff --git a/README.md b/README.md
index 538eb1e..54dd93f 100644
--- a/README.md
+++ b/README.md
@@ -88,22 +88,21 @@ $ make test -j
Before we get into how the algorithm works, a couple words of warning:
1. I'm not a mathematician! Some of the definitions here are a bit
- handwavey, and I'm skipping over the history of [BCH][BCH],
- [PGZ][PGZ], [Euclidean methods][Euclidean], etc. I'd encourage you to
- also explore [Wikipedia][wikipedia] and other relevant articles to
- learn more.
+ handwavey, and I'm skipping over the history of [BCH][w-bch] codes,
+ [PGZ][w-pgz], [Euclidean methods][w-euclidean], etc. I'd encourage you
+ to also explore [Wikipedia][w-rs] and other relevant articles to learn
+ more.
My goal is to explain, to the best of my (limited) knowledge, how to
implement Reed-Solomon codes, and how/why they work.
-2. The following math relies heavily on [finite-fields][finite-fields]
- (sometimes called [Galois-fields][finite-fields]) and the related
- theory.
+2. The following math relies heavily on [finite-fields][w-gf] (sometimes
+ called Galois-fields) and the related theory.
If you're not familiar with finite-fields, they are an abstraction we
- can make over finite numbers (bytes for [GF(256)][gf256], bits for
- [GF(2)][gf2]) that let us do most of math without worrying about pesky
- things like integer overflow.
+ can make over finite numbers (bytes for [GF(256)][w-gf256], bits for
+   [GF(2)][w-gf2]) that let us do most of the math without worrying
+   about pesky things like integer overflow.
But there's not enough space here to fully explain how they work, so
I'd suggest reading some of the above articles first.
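+
+   As a tiny taste (a sketch of mine, not from the articles): in
+   GF(256), addition is just XOR, so "adding" two bytes can never
+   overflow, and every element is its own additive inverse:
+
+   ```c
+   #include <assert.h>
+   #include <stdint.h>
+
+   int main(void) {
+       uint8_t a = 200, b = 100;
+       // 200 + 100 = 172 in GF(256), no overflow in sight
+       uint8_t sum = a ^ b;
+       // adding b again subtracts it right back out
+       assert((sum ^ b) == a);
+   }
+   ```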
@@ -126,9 +125,9 @@ polynomial", giving us a [systematic code][w-systematic-code].
However, two important differences:
-1. Instead of using a binary polynomial in [GF(2)][gf2], we use a
- polynomial in a higher-order [finite-field][finite-field], usually
- [GF(256)][gf256] because operating on bytes is convenient.
+1. Instead of using a binary polynomial in [GF(2)][w-gf2], we use a
+ polynomial in a higher-order [finite-field][w-gf], usually
+ [GF(256)][w-gf256] because operating on bytes is convenient.
2. We intentionally construct the polynomial to tell us information about
any errors that may occur.
@@ -158,8 +157,9 @@ points at $g^i$ where $i < n$ like so:
We could choose any arbitrary set of fixed points, but usually we choose
-$g^i$ where $g$ is a [generator][generator] in GF(256), since it provides
-a convenient mapping of integers to unique non-zero elements in GF(256).
+$g^i$ where $g$ is a [generator][w-generator] in GF(256), since it
+provides a convenient mapping of integers to unique non-zero elements in
+GF(256).
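+
+To get a feel for why, here's a quick sketch (mine, using the common
+irreducible polynomial 0x11d, which may not match the one used here)
+showing that repeated multiplication by $g = 2$ visits every non-zero
+element of GF(256) exactly once before wrapping back around to 1:
+
+```c
+#include <stdint.h>
+#include <stdio.h>
+
+// multiply by g = 2 in GF(256), reducing by the irreducible
+// polynomial 0x11d (a common choice, but implementations vary)
+static uint8_t gf256_mul2(uint8_t a) {
+    return (a << 1) ^ ((a & 0x80) ? 0x1d : 0);
+}
+
+int main(void) {
+    uint8_t x = 1; // g^0
+    for (int i = 0; i < 255; i++) {
+        printf("g^%d = 0x%02x\n", i, x);
+        x = gf256_mul2(x);
+    }
+    // at this point x has wrapped back around to g^0 = 1
+}
+```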
Note that for any fixed point $g^i$:
@@ -435,10 +435,10 @@ syndromes to solve for $e$ errors at unknown locations.
Ok that's the theory, but solving this system of equations efficiently is
still quite difficult.
-Enter [Berlekamp-Massey][berlekamp-massey].
+Enter [Berlekamp-Massey][w-bm].
A key observation by Massey is that solving for $\Lambda(x)$ is
-equivalent to constructing an LFSR that generates the sequence
+equivalent to constructing an [LFSR][w-lfsr] that generates the sequence
$S_e, S_{e+1}, \dots, S_{n-1}$ given the initial state
$S_0, S_1, \dots, S_{e-1}$:
@@ -454,7 +454,7 @@ $S_0, S_1, \dots, S_{e-1}$:
Pretty wild huh.
-We can describe such an LFSR with a [recurrence relation][recurrence-relation]
+We can describe such an LFSR with a [recurrence relation][w-recurrence-relation]
that might look a bit familiar:
@@ -632,8 +632,8 @@ This is all implemented in [ramrsbd_find_l][ramrsbd_find_l].
#### Solving binary LFSRs for fun
-Taking a step away from GF(256) for a moment, let's look at a simpler
-LFSR in GF(2), aka binary.
+Taking a step away from [GF(256)][w-gf256] for a moment, let's look at a
+simpler LFSR in [GF(2)][w-gf2], aka binary.
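+
+Stepping such an LFSR in C is pleasantly cheap: multiplying by the
+taps is an AND, and summing in GF(2) is XOR, aka parity. A minimal
+sketch (mine, with arbitrary taps, and definitely not the mystery
+LFSR below):
+
+```c
+#include <stdint.h>
+
+// step a 4-bit Fibonacci LFSR once, returning the emitted bit;
+// the taps 0b0011 here are just an arbitrary example
+static uint8_t lfsr4_step(uint8_t *state) {
+    uint8_t out = *state & 1;
+    // next bit = GF(2) dot product of state and taps = parity
+    uint8_t next = *state & 0x3;
+    next ^= next >> 1;
+    *state = (*state >> 1) | ((next & 1) << 3);
+    return out;
+}
+```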
Consider this binary sequence generated by a minimal LFSR that I know and
you don't :)
@@ -886,10 +886,10 @@ know $X_j$ is the location of an error:
Wikipedia and other resources often mention an optimization called
-[Chien's search][chiens-search] being applied here, but from reading up
-on the algorithm it seems to only be useful for hardware implementations.
-In software Chien's search doesn't actually improve our runtime over
-brute force with Horner's method and log tables ( $O(ne)$ vs $O(ne)$ ).
+[Chien's search][w-chien] being applied here, but from reading up on the
+algorithm it seems to only be useful for hardware implementations. In
+software Chien's search doesn't actually improve our runtime over brute
+force with Horner's method and log tables ( $O(ne)$ vs $O(ne)$ ).
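+
+For reference, brute force with Horner's method is just two nested
+loops. A rough sketch (mine; a real implementation would also lean on
+log tables for the multiplies):
+
+```c
+#include <stdbool.h>
+#include <stdint.h>
+
+// GF(256) multiply via shift-and-xor, reducing by the irreducible
+// polynomial 0x11d (a common choice, but implementations vary)
+static uint8_t gf256_mul(uint8_t a, uint8_t b) {
+    uint8_t p = 0;
+    while (b) {
+        if (b & 1) p ^= a;
+        a = (a << 1) ^ ((a & 0x80) ? 0x1d : 0);
+        b >>= 1;
+    }
+    return p;
+}
+
+// evaluate Lambda(x) with Horner's method, where lambda[] holds
+// Lambda's e+1 coefficients, highest degree first; if Lambda(x)
+// is zero, x^-1 is one of our error-locators X_j
+static bool lambda_is_root(const uint8_t *lambda, unsigned e,
+        uint8_t x) {
+    uint8_t y = lambda[0];
+    for (unsigned j = 1; j <= e; j++) {
+        // y = y*x + lambda[j], addition in GF(256) being XOR
+        y = gf256_mul(y, x) ^ lambda[j];
+    }
+    return y == 0;
+}
+```
+
+Checking all $n$ candidate locations this way is where the $O(ne)$
+comes from.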
### Finding the error magnitudes
@@ -921,7 +921,7 @@ and $e$ unknowns, which we can, in theory, solve for:
But again, solving this system of equations is easier said than done.
-Enter [Forney's algorithm][forneys-algorithm].
+Enter [Forney's algorithm][w-forney].
Assuming we know an error-locator $X_j$, the following formula will spit
out an error-magnitude $Y_j$:
@@ -953,7 +953,7 @@ our syndromes are a polynomial now):
>
-And $\Lambda'(x)$, the [formal derivative][formal-derivative] of the
+And $\Lambda'(x)$, the [formal derivative][w-formal-derivative] of the
error-locator, can be calculated like so:
@@ -1000,7 +1000,7 @@ $S_i = \sum_{k \in E} Y_k X_k^i x^i$:
>
-The sum on the right turns out to be a [geometric series][geometric-series]:
+The sum on the right turns out to be a [geometric series][w-geometric-series]: