README.md initial LaTeX cleanup
geky committed Oct 22, 2024
1 parent 68ea4bf commit 2ceaa96
If we want to correct $e$ byte-errors, we will need $n = 2e$ fixed
points. We can construct a generator polynomial $P(x)$ with $n$ fixed
points at $g^i$ where $i < n$ like so:

$$\\
P(x) = \prod_{i=0}^{n-1} \left(x - X_i\right)
$$

We could choose any arbitrary set of fixed points, but usually we choose
$g^i$ where $g$ is a [generator][generator] in GF(256), since it provides
a convenient mapping of integers to unique non-zero elements in GF(256).
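
As a quick sanity check, we can confirm this property numerically. The
sketch below is hypothetical, not code from this repo: it assumes GF(256)
is constructed from the common irreducible polynomial `0x11d`, under which
`g = 0x02` happens to be a generator (a different polynomial or generator
may be in use here), and the helper name `gf_mul` is made up for
illustration:

```python
GF_POLY = 0x11d  # assumed irreducible polynomial, may differ from this repo's

def gf_mul(a, b):
    # carryless multiply in GF(256), reducing mod GF_POLY bit by bit
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= GF_POLY
    return r

# walking g^0, g^1, ..., g^254 should visit every non-zero element exactly once
g = 0x02
seen = set()
x = 1
for i in range(255):
    seen.add(x)
    x = gf_mul(x, g)

assert len(seen) == 255  # all 255 non-zero elements of GF(256)
```

This is exactly the property that makes $g^0, g^1, \ldots, g^{n-1}$ a
convenient source of $n$ distinct non-zero fixed points.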

Note that for any fixed point $g^i$:

$$\\
\begin{aligned}
x - g^i &= g^i - g^i \\
&= 0
\end{aligned}
$$

And since multiplying anything by zero is zero, this will make our entire
product zero. So for any fixed point $g^i$, $P(g^i)$ should also evaluate
to zero:

$$\\
P(g^i) = 0
$$
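
A minimal sketch of this construction, under the same assumptions as
before (hypothetical helper names, GF(256) built from the polynomial
`0x11d`, generator `g = 0x02`; note that subtraction in GF(256) is just
xor):

```python
GF_POLY = 0x11d  # assumed irreducible polynomial

def gf_mul(a, b):
    # carryless multiply in GF(256), reduced mod GF_POLY
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= GF_POLY
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def poly_mul(p, q):
    # multiply two polynomials, coefficients in descending powers of x
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def poly_eval(p, x):
    # evaluate p(x) with Horner's method, addition is xor
    r = 0
    for c in p:
        r = gf_mul(r, x) ^ c
    return r

# P(x) = prod_{i<n} (x - g^i), where subtraction is xor in GF(256)
n = 4  # n = 2e fixed points to correct e = 2 byte-errors
P = [1]
for i in range(n):
    P = poly_mul(P, [1, gf_pow(2, i)])

# every fixed point g^i is a root of P(x)
for i in range(n):
    assert poly_eval(P, gf_pow(2, i)) == 0
```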

This gets real nifty when you look at the definition of our Reed-Solomon
code for codeword $C(x)$ given a message $M(x)$:

$$\\
C(x) = M(x) x^n - (M(x) x^n \bmod P(x))
$$

gives us a polynomial that is a multiple of $P(x)$. And since multiplying
anything by zero is zero, for any fixed point $g^i$, $C(g^i)$ should also
evaluate to zero:

$$\\
C(g^i) = 0
$$
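
Sketching the encoding under the same assumptions as the earlier snippets
(hypothetical helpers, polynomial `0x11d`, generator `0x02`); since
addition and subtraction are both xor in GF(256), and the low $n$
coefficients of $M(x) x^n$ are zero, subtracting the remainder is the same
as appending it:

```python
GF_POLY = 0x11d  # assumed irreducible polynomial

def gf_mul(a, b):
    # carryless multiply in GF(256), reduced mod GF_POLY
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= GF_POLY
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def poly_eval(p, x):
    # Horner's method, coefficients in descending powers of x
    r = 0
    for c in p:
        r = gf_mul(r, x) ^ c
    return r

def poly_mod(p, d):
    # remainder of p(x)/d(x); assumes d(x) is monic
    p = list(p)
    for i in range(len(p) - len(d) + 1):
        c = p[i]
        if c:
            for j, dc in enumerate(d):
                p[i + j] ^= gf_mul(c, dc)
    return p[len(p) - (len(d) - 1):]

# generator polynomial with fixed points g^i, i < n
n = 4
P = [1]
for i in range(n):
    P = poly_mul(P, [1, gf_pow(2, i)])

# C(x) = M(x) x^n - (M(x) x^n mod P(x)); subtracting the remainder
# just appends it, since the low n coefficients of M(x) x^n are zero
M = [0x68, 0x65, 0x6c, 0x6c, 0x6f]  # arbitrary message bytes
C = M + poly_mod(M + [0] * n, P)

# C(x) is a multiple of P(x), so every fixed point is a root
for i in range(n):
    assert poly_eval(C, gf_pow(2, i)) == 0
```

Note the resulting codeword is systematic: the first `len(M)` bytes are
the message itself, followed by $n$ bytes of remainder.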

We can think of introducing $e$ errors as adding an error polynomial
$E(x)$ to our original codeword, where $E(x)$ contains $e$ non-zero
terms:

$$\\
C'(x) = C(x) + E(x)
$$

Check out what happens if we plug in our fixed point, $g^i$:

$$\\
\begin{aligned}
C'(g^i) &= C(g^i) + E(g^i) \\
&= 0 + E(g^i) \\
&= E(g^i)
\end{aligned}
$$

The original codeword drops out! Leaving us with an equation defined only
by the error polynomial.
We call these evaluations our "syndromes" $S_i$, since they tell us
information about the error polynomial:

$$\\
S_i = C'(g^i) = E(g^i)
$$

We can also give the terms of the error polynomial names. Let's call the
$e$ non-zero terms the "error-magnitudes" $Y_j$:

$$\\
E(x) = \sum_{j \in e} Y_j x^j
$$

Plugging in our fixed points $g^i$ gives us another definition of our
syndromes $S_i$, which we can rearrange a bit for simplicity. This
results in another set of terms we call the "error-locators" $X_j=g^j$:

$$\\
\begin{aligned}
S_i = E(g^i) &= \sum_{j \in e} Y_j (g^i)^j \\
&= \sum_{j \in e} Y_j g^{ij} \\
&= \sum_{j \in e} Y_j X_j^i
\end{aligned}
$$
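
We can check this identity numerically. The sketch below (same assumptions
as earlier: hypothetical helpers, polynomial `0x11d`, generator `0x02`,
and made-up error locations/magnitudes) builds a codeword as an arbitrary
multiple of $P(x)$, corrupts it, and confirms that the syndromes match
$\sum_{j \in e} Y_j X_j^i$:

```python
GF_POLY = 0x11d  # assumed irreducible polynomial

def gf_mul(a, b):
    # carryless multiply in GF(256), reduced mod GF_POLY
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= GF_POLY
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def poly_eval(p, x):
    # Horner's method, coefficients in descending powers of x
    r = 0
    for c in p:
        r = gf_mul(r, x) ^ c
    return r

# generator polynomial with n = 2e fixed points
n, e = 4, 2
P = [1]
for i in range(n):
    P = poly_mul(P, [1, gf_pow(2, i)])

# any multiple of P(x) is a valid codeword with C(g^i) = 0
C = poly_mul([0x12, 0x34, 0x56, 0x78, 0x9a], P)

# corrupt e coefficients: E(x) = sum_j Y_j x^j (locations/magnitudes made up)
errors = {1: 0x33, 5: 0x42}  # {error-location j: error-magnitude Y_j}
Cp = list(C)
for j, Y in errors.items():
    Cp[len(Cp) - 1 - j] ^= Y  # coefficient of x^j, list is descending

# syndromes S_i = C'(g^i) should equal sum_j Y_j X_j^i with X_j = g^j
S = [poly_eval(Cp, gf_pow(2, i)) for i in range(n)]
for i in range(n):
    expect = 0
    for j, Y in errors.items():
        expect ^= gf_mul(Y, gf_pow(gf_pow(2, j), i))
    assert S[i] == expect
```

In particular $S_0 = Y_1 + Y_2$, since $X_j^0 = 1$ for every locator.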

Note that solving for $X_j$ also gives us our "error-locations" $j$,
Ok, let's say we received a codeword $C'(x)$ with $e$ errors. Evaluating
at our fixed points $g^i$, where $i < n$ and $n \ge 2e$, gives us our
syndromes $S_i$:

$$\\
S_i = C'(g^i) = \sum_{j \in e} Y_j X_j^i
$$

The next step is figuring out the locations of our errors $X_j$.
To help with this, we introduce another polynomial, the "error-locator
polynomial" $\Lambda(x)$:

```math
\Lambda(x) = \prod_{j \in e} \left(1 - X_j x\right)
```

This polynomial has some rather useful properties:

1. For any $X_j$, $\Lambda(X_j^{-1}) = 0$.

   This holds for the same reason that $P(g^i) = 0$. For any $X_j$:

<p align="center">
$`
\begin{aligned}
1 - X_j x &= 1 - X_j X_j^{-1} \\
&= 1 - 1 \\
&= 0
\end{aligned}
`$
</p>

And since multiplying anything by zero is zero, the product reduces to
zero.

2. $\Lambda(0) = 1$.

This can be seen by plugging in 0:

<p align="center">
$`
\begin{aligned}
\Lambda(0) &= \prod_{j \in e} \left(1 - X_j 0\right) \\
&= \prod_{j \in e} 1 \\
&= 1
\end{aligned}
`$
</p>

This prevents trivial solutions and is what makes $\Lambda(x)$ useful.
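
Both properties are easy to check numerically (same hypothetical GF(256)
helpers and assumptions as before; $X_j^{-1}$ is computed as $X_j^{254}$,
since $x^{255} = 1$ for any non-zero $x$ in GF(256)):

```python
GF_POLY = 0x11d  # assumed irreducible polynomial

def gf_mul(a, b):
    # carryless multiply in GF(256), reduced mod GF_POLY
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= GF_POLY
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    # x^255 = 1 for non-zero x, so x^254 = x^-1
    return gf_pow(a, 254)

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def poly_eval(p, x):
    # Horner's method, coefficients in descending powers of x
    r = 0
    for c in p:
        r = gf_mul(r, x) ^ c
    return r

# Lambda(x) = prod_j (1 - X_j x) for made-up error-locations j = 1, 5
Xs = [gf_pow(2, j) for j in [1, 5]]  # error-locators X_j = g^j
Lam = [1]
for X in Xs:
    Lam = poly_mul(Lam, [X, 1])  # 1 - X_j x is X_j x + 1, descending

# property 1: Lambda(X_j^-1) = 0
for X in Xs:
    assert poly_eval(Lam, gf_inv(X)) == 0

# property 2: Lambda(0) = 1
assert poly_eval(Lam, 0) == 1
```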

