From f6bb7d7ff101a40f2ac903fba19c48bbd10d2641 Mon Sep 17 00:00:00 2001
From: Christopher Haster
Date: Tue, 22 Oct 2024 16:32:04 -0500
Subject: [PATCH] README.md initial LaTeX cleanup

---
 README.md | 77 +++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 50 insertions(+), 27 deletions(-)

diff --git a/README.md b/README.md
index ea2e854..4aec0a8 100644
--- a/README.md
+++ b/README.md
@@ -115,7 +115,7 @@ If we want to correct $e$ byte-errors, we will need $n = 2e$ fixed points.
 We can construct a generator polynomial $P(x)$ with $n$ fixed points at
 $g^i$ where $i < n$ like so:
 
-$$
-P(x) = \prod_{i=0}^n \left(x - X_i\right)
+$$\\
+P(x) = \prod_{i=0}^{n-1} \left(x - g^i\right)
 $$
 
@@ -123,19 +123,27 @@ We could choose any arbitrary set of fixed points, but usually we choose
 $g^i$ where $g$ is a [generator][generator] in GF(256), since it
 provides a convenient mapping of integers to unique non-zero elements in
 GF(256).
 
-Note that for any fixed point $g^i$, $x - g^i = g^i - g^i = 0$. And
-since multiplying anything by zero is zero, this will make our entire
+Note that for any fixed point $g^i$:
+
+$$\\
+\begin{aligned}
+x - g^i &= g^i - g^i \\
+        &= 0
+\end{aligned}
+$$
+
+And since multiplying anything by zero is zero, this will make our entire
 product zero. So for any fixed point $g^i$, $P(g^i)$ should also
 evaluate to zero:
 
-$$
+$$\\
 P(g^i) = 0
 $$
 
 This gets real nifty when you look at the definition of our Reed-Solomon
 code for codeword $C(x)$ given a message $M(x)$:
 
-$$
+$$\\
 C(x) = M(x) x^n - (M(x) x^n \bmod P(x))
 $$
 
@@ -144,7 +152,7 @@ gives us a polynomial that is a multiple of $P(x)$.
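Not part of the patch itself, but the construction above is easy to sanity-check. Here is a rough Python sketch, assuming the common GF(256) reduction polynomial `0x11d` and generator `g = 2` (the parameters the actual implementation uses may differ):

```python
def gf_mul(a, b):
    # multiply in GF(256), reducing modulo the polynomial 0x11d
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def gf_pow(a, e):
    # repeated multiplication; fine for a sketch
    p = 1
    for _ in range(e):
        p = gf_mul(p, a)
    return p

def poly_mul(f, q):
    # polynomial multiply, coefficients ascending (f[i] is the x^i term)
    r = [0] * (len(f) + len(q) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(q):
            r[i+j] ^= gf_mul(a, b)
    return r

def poly_eval(f, x):
    # Horner's scheme
    r = 0
    for c in reversed(f):
        r = gf_mul(r, x) ^ c
    return r

def gen_poly(n, g=2):
    # P(x) = prod_{i=0}^{n-1} (x - g^i); note x - g^i = x + g^i in GF(2^8)
    p = [1]
    for i in range(n):
        p = poly_mul(p, [gf_pow(g, i), 1])
    return p

# every fixed point g^i, i < n, is a root of P
P = gen_poly(4)
assert all(poly_eval(P, gf_pow(2, i)) == 0 for i in range(4))
```

Since subtraction and addition are both xor in GF(2^8), each factor `(x - g^i)` is just the two-term polynomial `[g^i, 1]`.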
 And since multiplying anything by zero is zero, for any fixed point
 $g^i$, $C(g^i)$ should also evaluate to zero:
 
-$$
+$$\\
 C(g^i) = 0
 $$
 
@@ -156,16 +164,18 @@ We can think of introducing $e$ errors as adding an error polynomial
 $E(x)$ to our original codeword, where $E(x)$ contains $e$ non-zero
 terms:
 
-$$
+$$\\
 C'(x) = C(x) + E(x)
 $$
 
 Check out what happens if we plug in our fixed point, $g^i$:
 
-$$
-C'(g^i) = C(g^i) + E(g^i)
-        = 0 + E(g^i)
-        = E(g^i)
+$$\\
+\begin{aligned}
+C'(g^i) &= C(g^i) + E(g^i) \\
+        &= 0 + E(g^i) \\
+        &= E(g^i)
+\end{aligned}
 $$
 
 The original codeword drops out! Leaving us with an equation defined only
@@ -174,14 +184,14 @@ by the error polynomial.
 We call these evaluations our "syndromes" $S_i$, since they tell us
 information about the error polynomial:
 
-$$
+$$\\
 S_i = C'(g^i) = E(g^i)
 $$
 
 We can also give the terms of the error polynomial names. Let's call
 the $e$ non-zero terms the "error-magnitudes" $Y_j$:
 
-$$
+$$\\
 E(x) = \sum_{j \in e} Y_j x^j
 $$
 
@@ -189,10 +199,12 @@ Plugging in our fixed points $g^i$ gives us another definition of our
 syndromes $S_i$, which we can rearrange a bit for simplicity. This
 results in another set of terms we call the "error-locators" $X_j=g^j$:
 
-$$
-S_i = E(g^i) = \sum_{j \in e} Y_j (g^i)^j
-             = \sum_{j \in e} Y_j g^{ij}
-             = \sum_{j \in e} Y_j X_j^i
+$$\\
+\begin{aligned}
+S_i = E(g^i) &= \sum_{j \in e} Y_j (g^i)^j \\
+             &= \sum_{j \in e} Y_j g^{ij} \\
+             &= \sum_{j \in e} Y_j X_j^i
+\end{aligned}
 $$
 
 Note that solving for $X_j$ also gives us our "error-locations" $j$,
@@ -208,7 +220,7 @@ Ok, let's say we received a codeword $C'(x)$ with $e$ errors. Evaluating
 at our fixed points $g^i$, where $i < n$ and $n \ge 2e$, gives us our
 syndromes $S_i$:
 
-$$
+$$\\
 S_i = C'(g^i) = \sum_{j \in e} Y_j X_j^i
 $$
 
@@ -217,26 +229,37 @@ The next step is figuring out the locations of our errors $X_j$.
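Again outside the patch, the syndrome derivation above can be checked numerically. A sketch in Python, assuming GF(256) with polynomial `0x11d` and `g = 2`, and a hypothetical single-byte error at location `j = 2` with magnitude `0xff`. Rather than performing the full `mod`-based encoding, it uses any multiple of `P(x)` as the codeword, which per the text is exactly what the encoding produces:

```python
def gf_mul(a, b):
    # multiply in GF(256), reducing modulo the polynomial 0x11d
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def gf_pow(a, e):
    p = 1
    for _ in range(e):
        p = gf_mul(p, a)
    return p

def poly_mul(f, q):
    # polynomial multiply, coefficients ascending
    r = [0] * (len(f) + len(q) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(q):
            r[i+j] ^= gf_mul(a, b)
    return r

def poly_eval(f, x):
    # Horner's scheme
    r = 0
    for c in reversed(f):
        r = gf_mul(r, x) ^ c
    return r

def gen_poly(n, g=2):
    # P(x) = prod_{i=0}^{n-1} (x - g^i)
    p = [1]
    for i in range(n):
        p = poly_mul(p, [gf_pow(g, i), 1])
    return p

n = 4
P = gen_poly(n)
C = poly_mul([0x12, 0x34, 0x56], P)  # a multiple of P(x) => C(g^i) = 0
assert all(poly_eval(C, gf_pow(2, i)) == 0 for i in range(n))

# introduce one error: C'(x) = C(x) + E(x), E(x) = Y_j x^j
j, Y = 2, 0xff
Cp = list(C)
Cp[j] ^= Y

# the codeword drops out of the syndromes: S_i = Y_j X_j^i with X_j = g^j
for i in range(n):
    S = poly_eval(Cp, gf_pow(2, i))
    X = gf_pow(2, j)
    assert S == gf_mul(Y, gf_pow(X, i))
```

Note every syndrome of the corrupted codeword is non-zero here, since a field has no zero divisors, while the syndromes of the clean codeword are all zero.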
 To help with this, we introduce another polynomial, the "error-locator
 polynomial" $\Lambda(x)$:
 
-$$
+$$\\
 \Lambda(x) = \prod_{j \in e} \left(1 - X_j x\right)
 $$
 
 This polynomial has some rather useful properties:
 
-1. For any $X_j$, $\Lambda(X_j^-1) = 0$.
+1. For any $X_j$, $\Lambda(X_j^{-1}) = 0$.
+
+   This is for similar reasons why $P(g^i) = 0$. For any $X_j$:
+
+   $$\\
+   \begin{aligned}
+   1 - X_j x &= 1 - X_j X_j^{-1} \\
+             &= 1 - 1 \\
+             &= 0
+   \end{aligned}
+   $$
 
-   This is for similar reasons why $P(g^i) = 0$. For any $X_j$,
-   $1 - X_j x = 1 - X_j X_j^-1 = 1 - 1 = 0$. And since multiplying
-   anything by zero is zero, the product reduces to zero.
+   And since multiplying anything by zero is zero, the product reduces to
+   zero.
 
 2. $\Lambda(0) = 1$.
 
    This can be seen by plugging in 0:
 
    $$\\
-   \Lambda(0) = \prod_{j \in e} \left(1 - X_j 0\right)
-              = \prod_{j \in e} 1
-              = 1
+   \begin{aligned}
+   \Lambda(0) &= \prod_{j \in e} \left(1 - X_j 0\right) \\
+              &= \prod_{j \in e} 1 \\
+              &= 1
+   \end{aligned}
    $$
 
    This prevents trivial solutions and is what makes $\Lambda(x)$ useful.
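Also outside the patch, both properties of $\Lambda(x)$ can be demonstrated numerically. A Python sketch, again assuming GF(256) with polynomial `0x11d` and `g = 2`, and hypothetical error locations 1, 4, and 7:

```python
def gf_mul(a, b):
    # multiply in GF(256), reducing modulo the polynomial 0x11d
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def gf_pow(a, e):
    p = 1
    for _ in range(e):
        p = gf_mul(p, a)
    return p

def gf_inv(a):
    # a^254 = a^-1, since a^255 = 1 for any non-zero a in GF(256)
    return gf_pow(a, 254)

def poly_mul(f, q):
    # polynomial multiply, coefficients ascending
    r = [0] * (len(f) + len(q) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(q):
            r[i+j] ^= gf_mul(a, b)
    return r

def poly_eval(f, x):
    # Horner's scheme
    r = 0
    for c in reversed(f):
        r = gf_mul(r, x) ^ c
    return r

def error_locator(Xs):
    # Lambda(x) = prod_j (1 - X_j x); 1 - X_j x = 1 + X_j x in GF(2^8)
    L = [1]
    for X in Xs:
        L = poly_mul(L, [1, X])
    return L

# hypothetical error-locators X_j = g^j for error locations j = 1, 4, 7
Xs = [gf_pow(2, j) for j in (1, 4, 7)]
L = error_locator(Xs)

assert poly_eval(L, 0) == 1                           # property 2
assert all(poly_eval(L, gf_inv(X)) == 0 for X in Xs)  # property 1
```

Since $\Lambda(x)$ has degree $e$, its roots are exactly the inverse error-locators $X_j^{-1}$ and nothing else, which is what makes it useful for finding the error locations.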