There are plenty of things that turned out to be useful to me despite my not recognizing their names or taglines when I first encountered them, so I don’t just assume that anything I’m not already familiar with isn’t “for” me. A brief explanation for non-insiders (or even a mention of which field it’s relevant to) would have helped establish that.
Skimming through the linked paper, I noticed this:
Scaling beyond a certain point will deteriorate the compression performance since the model parameters need to be accounted for in the compressed output.
So it sounds like the model parameters needed to decompress the file are included in the file itself.
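That would explain the scaling tradeoff: a bigger model may squeeze the data harder, but its parameters also add to the total output size. A quick back-of-the-envelope sketch (all numbers made up for illustration, not from the paper):

```python
# Sketch: effective compression ratio when the decoder model's
# parameters count toward the compressed output.
def effective_ratio(raw_bytes: int, compressed_bytes: int, model_bytes: int) -> float:
    """Ratio of original size to total output (compressed data + model)."""
    return raw_bytes / (compressed_bytes + model_bytes)

# Hypothetical: a 1 GB file compressed to 100 MB by a model whose
# parameters take 400 MB to store alongside it.
print(effective_ratio(1_000_000_000, 100_000_000, 400_000_000))  # 2.0
# Ignoring the model, the ratio would have looked like 10x.
```

Past some model size, the parameter overhead dominates and the effective ratio gets worse, which matches the quoted claim.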
That varies by subreddit, which might actually help in training LLMs to recognize the difference.