This repository contains an implementation of the LZ77 algorithm in Rust, using a token interface to track the distance and length of repeated occurrences. LZ77 is a lossless data compression algorithm that replaces repeated occurrences of data with references to an earlier copy of that data.
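The encoder and decoder exchange `LZ77Token` values. The crate's exact definition is not reproduced here, but judging from the field names in the example output further down, a minimal sketch of the token type might look like this (the concrete field types are assumptions):

```rust
/// A single LZ77 token: a back-reference of `length` bytes starting
/// `distance` bytes behind the current position, plus a literal byte
/// `char`. A token with length == 0 is a plain literal.
/// (Sketch based on the fields shown in the example output below.)
#[derive(Debug, Clone, PartialEq)]
pub struct LZ77Token {
    pub length: usize,
    pub distance: usize,
    pub char: u8,
}
```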
The encode function takes the input data as a string slice (&str), applies the LZ77 algorithm to its underlying bytes, and returns a token stream (Vec<LZ77Token>) that represents the encoded data.
```rust
fn encode(&self, input: &str) -> Vec<LZ77Token> { /* ... */ }
```
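The repository's own implementation is not shown here, but a minimal greedy sketch of this signature could look as follows. The `LZ77` unit struct, the 255-byte search window, and the convention that every back-reference is capped so a real literal always follows it are assumptions for illustration; the example output below (a `char: 0` on the final token) suggests the crate handles the end of input slightly differently.

```rust
/// Hypothetical unit struct standing in for whatever type the crate
/// implements these methods on.
pub struct LZ77;

impl LZ77 {
    /// Greedy LZ77 encoding over a bounded search window (sketch).
    /// A token is either a literal (length == 0) or a back-reference
    /// followed by the next literal byte.
    fn encode(&self, input: &str) -> Vec<LZ77Token> {
        const WINDOW: usize = 255; // assumed search-window size
        let bytes = input.as_bytes();
        let mut tokens = Vec::new();
        let mut pos = 0;

        while pos < bytes.len() {
            let window_start = pos.saturating_sub(WINDOW);
            let mut best_len = 0;
            let mut best_dist = 0;

            // Find the longest match starting at `pos` inside the window.
            for start in window_start..pos {
                let mut len = 0;
                // Matches may overlap the current position (e.g. "aaaa"),
                // and are capped so one literal always remains after them.
                while pos + len + 1 < bytes.len() && bytes[start + len] == bytes[pos + len] {
                    len += 1;
                }
                if len > best_len {
                    best_len = len;
                    best_dist = pos - start;
                }
            }

            if best_len > 0 {
                // Back-reference plus the literal that follows it.
                tokens.push(LZ77Token {
                    length: best_len,
                    distance: best_dist,
                    char: bytes[pos + best_len],
                });
                pos += best_len + 1;
            } else {
                // No match found: emit a plain literal.
                tokens.push(LZ77Token {
                    length: 0,
                    distance: 0,
                    char: bytes[pos],
                });
                pos += 1;
            }
        }

        tokens
    }
}
```

The search is greedy: the longest match in the window wins, and matches longer than their distance (overlapping copies) are allowed, which is what lets a run like "aaaa" compress to two tokens.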
The decode function takes a token stream produced by the LZ77 encoder and reconstructs the original data, returning it as a String.
```rust
fn decode(&self, data: &Vec<LZ77Token>) -> String { /* ... */ }
```
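A matching sketch of decode, again assuming the hypothetical `LZ77` type and the token convention from the encode sketch above, replays each token against the output built so far:

```rust
impl LZ77 {
    /// Rebuilds the original text by replaying each token against the
    /// output produced so far (sketch, same token convention as above).
    fn decode(&self, data: &Vec<LZ77Token>) -> String {
        let mut out: Vec<u8> = Vec::new();

        for token in data {
            // Copy `length` bytes starting `distance` bytes behind the end
            // of the output. Copying byte by byte lets a reference overlap
            // the bytes it is producing (e.g. length 3, distance 1).
            let start = out.len() - token.distance;
            for i in 0..token.length {
                let byte = out[start + i];
                out.push(byte);
            }
            // Append the literal carried by the token.
            out.push(token.char);
        }

        String::from_utf8(out).expect("decoded bytes should be valid UTF-8")
    }
}
```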
You can run this module with: `cargo run -- <string-to-compress>`.
```
> cargo run -- "aaaa"
Encoded: 2 | Decoded: 4 | Input: 4
Encoded result: [LZ77Token { length: 0, distance: 0, char: 97 }, LZ77Token { length: 3, distance: 1, char: 0 }]
Decoded result: "aaaa"
Encoded bytes: 24
Decoded bytes: 24
Execution time: 0.14s
```
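The summary line appears to report the token count alongside the decoded and input byte lengths. A hedged sketch of an entry point that would produce output in roughly this shape, assuming the `LZ77` and `LZ77Token` sketches above (the real binary's argument handling, byte accounting, and formatting may differ):

```rust
use std::env;
use std::time::Instant;

fn main() {
    // First positional argument is the string to compress (assumed CLI shape).
    let input = env::args()
        .nth(1)
        .expect("usage: cargo run -- <string-to-compress>");

    let codec = LZ77; // hypothetical unit struct from the sketches above
    let start = Instant::now();

    let encoded = codec.encode(&input);
    let decoded = codec.decode(&encoded);

    // Token count and byte lengths, roughly matching the summary line above.
    println!(
        "Encoded: {} | Decoded: {} | Input: {}",
        encoded.len(),
        decoded.len(),
        input.len()
    );
    println!("Encoded result: {:?}", encoded);
    println!("Decoded result: {:?}", decoded);
    println!("Execution time: {:.2}s", start.elapsed().as_secs_f64());
}
```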
The module also includes unit tests, which double as readable usage examples; run them with `cargo test`.
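A round-trip test in the standard `#[cfg(test)]` style, assuming the hypothetical `LZ77` type from the sketches above, might look like this:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn encode_then_decode_round_trips() {
        // Hypothetical LZ77 unit struct from the sketches above.
        let codec = LZ77;
        for input in ["aaaa", "abcabcabc", "hello world"] {
            let tokens = codec.encode(input);
            assert_eq!(codec.decode(&tokens), input);
        }
    }
}
```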
The code in this repository is intended to demonstrate the LZ77 algorithm in Rust. It serves as a learning resource and may not have undergone extensive testing or optimization for production use.