zstd
lzbench
| | zstd | lzbench |
|---|---|---|
| Mentions | 109 | 9 |
| Stars | 22,581 | 848 |
| Growth | 0.8% | - |
| Activity | 9.6 | 1.4 |
| Latest commit | 3 days ago | 11 days ago |
| Language | C | C |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zstd
-
Rethinking string encoding: a 37.5% more space-efficient encoding than UTF-8 in Fury
> In such cases, the serialized binaries are mostly 200~1000 bytes. Not big enough for zstd to work
You're not referring to the same dictionary that I am. Look at --train in [1].
If you have a training corpus of representative data, you can generate a dictionary that you preshare on both sides which will perform much better for very small binaries (including 200-1k bytes).
If you want maximum flexibility (i.e. you don't know the universe of representative messages ahead of time, or you want maximum compression performance), you can gather this corpus transparently as messages are generated, then generate a dictionary and attach it as sideband metadata to a message. You'll probably need to defer decoding if a message references a dictionary not yet received (i.e. the transport delivers messages out of order relative to generation). There are other techniques you can apply, but the general rule is that your custom encoding scheme is unlikely to outperform zstd + a representative training corpus. If it does, you'd need to actually show this rather than argue from first principles.
[1] https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md
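As a rough sketch of that workflow (the file names here are made up; the `--train` and `-D` flags are documented in the manual linked above):
```
# Train a dictionary from a corpus of representative small messages
zstd --train samples/*.json -o messages.dict

# Compress and decompress small payloads with the pre-shared dictionary
zstd -D messages.dict payload.json -o payload.json.zst
zstd -D messages.dict -d payload.json.zst -o payload.json
```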
-
Drink Me: (Ab)Using an LLM to Compress Text
> Doesn't take large amounts of GPU resources
This is an understatement; zstd dictionary compression and decompression are blazingly fast: https://github.com/facebook/zstd/blob/dev/README.md#the-case...
My real-world use case for this was JSON files in a particular schema, and the results were fantastic.
-
SQLite VFS for ZSTD seekable format
This VFS will read a SQLite file after it has been compressed using the [zstd seekable format](https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...). Built to support read-only databases for full-text search. Benchmarks are provided in the README.
-
Chrome Feature: ZSTD Content-Encoding
Of course, you may get different results with another dataset.
gzip (zlib -6) [ratio=32%] [compr=35MB/s] [dec=407MB/s]
zstd (zstd -2) [ratio=32%] [compr=356MB/s] [dec=1067MB/s]
NB1: The default for zstd is -3, but the table only had -2. The difference is probably small. The range is 1-22 for zstd and 1-9 for gzip.
NB2: The default gzip program (at least on Debian) is the executable from zlib. In my workflows, libdeflate-gzip is compatible and noticeably faster.
NB3: This benchmark is 2 years old. The latest releases of zstd are much better, see https://github.com/facebook/zstd/releases
For high compression ratios, xz can do slightly better according to this benchmark, if you're willing to pay a roughly 10× penalty on decompression speed.
xz -9 [ratio=23%] [compr=2.6MB/s] [dec=88MB/s]
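To get numbers for your own data rather than a canned corpus, you can use zstd's built-in benchmark mode (a sketch; `myfile` is a placeholder):
```
# Benchmark compression levels 1 through 19 on a sample file
zstd -b1 -e19 myfile

# Pin a single level and lengthen the timing window for stabler numbers
zstd -b3 -i10 myfile
```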
-
Zstandard v1.5.6 – Chrome Edition
-
Optimizing Rabin-Karp Hashing
Compression, synchronization and backup systems often use a rolling hash to implement "content-defined chunking", an effective form of deduplication.
In optimized implementations, Rabin-Karp is likely to be the bottleneck. See for instance https://github.com/facebook/zstd/pull/2483, which replaces a Rabin-Karp variant with Gear hashing and is more than 2× faster: Gear needs only a shift, an add and a table lookup per input byte, where Rabin-Karp needs multiplications.
-
Show HN: macOS-cross-compiler – Compile binaries for macOS on Linux
-
Cyberpunk 2077 dev release
Get the data: https://publicdistst.blob.core.windows.net/data/root.tar.zst
magnet:?xt=urn:btih:84931cd80409ba6331f2fcfbe64ba64d4381aec5&dn=root.tar.zst
How to extract: https://github.com/facebook/zstd
Linux (Debian): `sudo apt install zstd`
```
tar -I 'zstd -d -T0' -xvf root.tar.zst
```
-
Honey, I shrunk the NPM package · Jamie Magee
I've done that experiment with zstd before.
https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md...
Not sure about brotli though.
-
How in the world should we unpack archive.org zst files on Windows?
If you want this functionality in zstd itself, check this out: https://github.com/facebook/zstd/pull/2349
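For what it's worth, one common stumbling block with archive.org's large .zst dumps (an assumption about what this thread ran into) is that they are created with a window bigger than zstd's default decompression limit, in which case you need something like:
```
# Accept windows up to 2^31 bytes when decompressing
zstd -d --long=31 archive.tar.zst
```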
lzbench
-
Chrome Feature: ZSTD Content-Encoding
For a benchmark on a standard set: https://github.com/inikep/lzbench/blob/master/lzbench18_sort...
-
My experience with btrfs so far
Do not re-compress your files into level 3. Decompression speed is largely the same between levels 3 and 8, so you'd just be wasting CPU doing nothing and making your files larger. See the bottom of the README: https://github.com/inikep/lzbench
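For reference, btrfs lets you pick the zstd level at mount time, so there is no need to rewrite existing data (a sketch; the mount point is a placeholder, and only data written after the remount is affected):
```
# Write new data at zstd level 8 instead of the default 3
mount -o remount,compress=zstd:8 /mnt/data
```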
-
Rsyncing 20TB locally
You can crunch the numbers yourself with this: https://github.com/inikep/lzbench
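A minimal sketch of such a run (the file name is a placeholder; `-e` selects codecs and levels):
```
# Compare a few candidate codecs/levels on a representative sample of the data
lzbench -ezstd,1,3,19/lz4 sample.bin
```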
-
Lizard – efficient compression with fast decompression
Note that a benchmark in the README refers to zstd 1.1.1 and brotli 0.5.2, which are very old (the current versions are zstd 1.5.2 and brotli 1.0.9). The same author maintains lzbench [1], which is more or less up-to-date.
[1] https://github.com/inikep/lzbench
-
What scientists must know about hardware to write fast code
-
Zip-Ada development on LZMA compression
u/zertillon, maybe you could use lzbench to compare Zip-Ada with a lot of other compression libraries. The catch is that lzbench links every library into a single executable (the benchmark itself is in C++), so an Ada library might be more difficult to integrate than a C one.
-
Is there any site that lists the current SOTA for lossless compression?
Still updated: https://github.com/inikep/lzbench
-
will ZSTD impact L2ARC performance?
If you want to know the size a VM will compress to: zstd can be installed on any machine, so you can experiment easily. You can even run the benchmark https://github.com/inikep/lzbench
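Concretely, a quick way to check (a sketch; the image path is a placeholder):
```
# Compress a VM image to stdout at the default level and count the output bytes
zstd -3 -c vm.img | wc -c
```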
-
Save disk space for your games: BTRFS filesystem compression as alternative to CompactGUI on Linux
Are you sure about that? That's not what I see on https://github.com/inikep/lzbench and I tried to run that myself, although I have no idea which lzo to try so I went with what seemed the fastest...
What are some alternatives?
LZ4 - Extremely Fast Compression algorithm
7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard
Snappy - A fast compressor/decompressor
CompactGUI - Transparently compress active games and programs using Windows 10/11 APIs [Moved to: https://github.com/IridiumIO/CompactGUI]
LZMA - (Unofficial) Git mirror of LZMA SDK releases
11Zip - Dead simple zipping / unzipping C++ Lib
ZLib - A massively spiffy yet delicately unobtrusive compression library.
qemu - A generic and open source machine & userspace emulator and virtualizer
brotli - Brotli compression format
zip-ada - Zip-Ada: a standalone, portable Ada library for .zip archives. Includes LZMA byte stream encoder & decoder pair.