Roberto Tacconelli PRO
robtacconelli
AI & ML interests
None yet
Recent Activity
reacted to their post with 🤯 about 6 hours ago
reacted to their post about 7 hours ago
posted an update about 7 hours ago
🧬 Midicoth: diffusion-based lossless compression with no neural net, no GPU, no training data
What if reverse diffusion could compress text without a neural network?
Midicoth brings score-based denoising into classical compression. It treats prior smoothing as forward noise and reverses it with Tweedie's formula on a binary tree: 3 denoising steps with James-Stein shrinkage, applied after all model blending. ~2,000 lines of C, single CPU core.
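To make that concrete, here's a minimal sketch of the "denoise after blending" idea. Loud caveat: the logit-space framing, the Gaussian noise model (`noise_var`), and the running-mean score estimate (`mu`, `var` over past logits) are illustrative assumptions, not the repo's actual code.

```c
#include <math.h>

/* Tweedie's formula under Gaussian noise:
 *   E[clean | noisy x] = x + noise_var * d/dx log p(x).
 * Here the marginal score is approximated with a Gaussian fit
 * (running mean mu, variance var > 0) over past logits. */
static double logit(double p) { return log(p / (1.0 - p)); }
static double expit(double x) { return 1.0 / (1.0 + exp(-x)); }

/* James-Stein-style positive-part shrinkage toward the shared mean. */
static double js_shrink(double x, double mu, double noise_var, double var)
{
    double f = 1.0 - noise_var / var;
    if (f < 0.0) f = 0.0;                     /* positive part */
    return mu + f * (x - mu);
}

/* Denoise a blended probability: shrink, then 3 small Tweedie steps. */
double denoise_prob(double p_blend, double mu, double var, double noise_var)
{
    double x = js_shrink(logit(p_blend), mu, noise_var, var);
    for (int i = 0; i < 3; i++)               /* 3 denoising steps   */
        x += noise_var * (mu - x) / var;      /* x + noise_var*score */
    return expit(x);
}
```

The key structural point survives the simplification: denoising runs on the final blended probability, after every model has voted, not inside any single model.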
Beats every dictionary compressor we tested:
enwik8 (100 MB) → 1.753 bpb (−11.9% vs xz, −15% vs Brotli, −24.5% vs bzip2)
alice29.txt → 2.119 bpb (−16.9% vs xz)
Outperforms xz, zstd, Brotli, bzip2, gzip on all inputs
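For readers new to the metric: bpb is compressed bits per input byte, lower is better. A quick reference computation; the compressed size and the 1.990 bpb baseline below are hypothetical, back-derived to match the enwik8 figures above:

```c
#include <stdio.h>

/* bits per byte = 8 * compressed_size / original_size */
static double bpb(long long compressed_bytes, long long original_bytes)
{
    return 8.0 * (double)compressed_bytes / (double)original_bytes;
}

/* Relative change vs a baseline, as quoted above (e.g. vs xz). */
static double pct_vs(double ours_bpb, double baseline_bpb)
{
    return 100.0 * (ours_bpb - baseline_bpb) / baseline_bpb;
}

int main(void)
{
    /* enwik8 is 100,000,000 bytes; ~21.9 MB output gives 1.753 bpb. */
    double ours = bpb(21912500LL, 100000000LL);
    printf("%.3f bpb, %.1f%% vs a 1.990 bpb baseline\n",
           ours, pct_vs(ours, 1.990));
    return 0;
}
```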
PAQ/CMIX still win with hundreds of models + LSTMs. LLM compressors win with pre-trained knowledge. Midicoth closes the gap with pure statistics: no mixer, no gradient descent, just counting.
The Tweedie denoising layer adds 2.3–2.7% on every file tested, the most consistent component in the ablation. Adding SSE or logistic mixers made things worse. In the online setting, count-based beats gradient-based.
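"Just counting" means estimators of this shape. A Krichevsky-Trofimov rule is the textbook count-based choice, shown here as a generic example; the repo's exact counting and context scheme may differ:

```c
/* Count-based bit prediction on a context: no gradients, no mixer.
 * KT estimator: p(next bit = 1) = (n1 + 0.5) / (n0 + n1 + 1). */
typedef struct { unsigned n0, n1; } BitCounter;

static double predict_one(const BitCounter *c)
{
    return (c->n1 + 0.5) / (c->n0 + c->n1 + 1.0);
}

static void update(BitCounter *c, int bit)
{
    if (bit) c->n1++; else c->n0++;
}
```

Updates like this have no learning rate to tune and no floating-point training trajectory to diverge, which is part of why count-based methods pair so naturally with fully deterministic, bit-exact coding.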
No external dependencies. Fully deterministic. Bit-exact encode/decode. ~60 KB/s throughput.
💻 Code: https://github.com/robtacconelli/midicoth
Paper: https://huggingface.co/papers/2603.08771
⭐ Space: https://huggingface.co/spaces/robtacconelli/midicoth
If you ever wondered whether diffusion ideas belong in data compression, here's proof they do. ⭐ appreciated!
Organizations
None yet