• 0 Posts
  • 9 Comments
Joined 2Y ago
Cake day: Jun 13, 2023


That varies by subreddit, which might actually help in training LLMs to recognize the difference.


That will remove your account from public view, but will it remove it from the data they use for AI training?

If not, you’re just enhancing the value of their proprietary data.


At some point someone’s going to train an LLM on material from successful scams to autonomously generate new scams and wire the money to server farms that run more copies of itself.


But how will we automate our trolley problems?


Marching up to the next non-empty key would skew the distribution—pages preceded by more empty keys would show up more often under “random”.
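A quick simulation makes the skew concrete. This is a minimal sketch with made-up keys; the `pages` list, the keyspace size, and the wrap-around behavior are all assumptions for illustration, not anyone’s actual schema:

```python
import random
from bisect import bisect_left

# Hypothetical sparse keyspace: pages exist only at these keys.
pages = [2, 3, 10, 11, 12, 100]
KEYSPACE = 101  # candidate keys 0..100

def march_up(k):
    """Return the first occupied key >= k, wrapping past the end."""
    i = bisect_left(pages, k)
    return pages[i % len(pages)]

counts = {p: 0 for p in pages}
for _ in range(100_000):
    counts[march_up(random.randrange(KEYSPACE))] += 1

print(counts)
# Key 100 absorbs the 87 empty keys before it (13..99), so it lands
# roughly 88x as often as keys 3, 11, or 12, which have no empty run.
```

A uniform draw over the occupied keys themselves (e.g. `random.choice(pages)`) avoids the bias entirely.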


There are plenty of things that turned out to be useful to me despite my not recognizing their names or taglines when I first encountered them, so I don’t just assume that anything I’m not already familiar with isn’t “for” me. A brief explanation for non-insiders (or even a mention of which field it’s relevant to) would have helped establish that.


Skimming through the linked paper, I noticed this:

Scaling beyond a certain point will deteriorate the compression performance since the model parameters need to be accounted for in the compressed output.

So it sounds like the model parameters needed to decompress the file are included in the file itself.
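That accounting caps how far scaling can help. Here is a back-of-the-envelope sketch with invented numbers, just to show the tradeoff the quoted sentence describes:

```python
# Invented numbers: compress a 1 GB corpus under an accounting where
# the decompressor's parameters count toward the compressed output.
DATA = 1_000_000_000  # bytes of raw input

def effective_ratio(model_bytes, payload_bytes):
    # Charged size = model parameters + compressed payload.
    return (model_bytes + payload_bytes) / DATA

small = effective_ratio(model_bytes=200_000_000, payload_bytes=150_000_000)
big = effective_ratio(model_bytes=800_000_000, payload_bytes=100_000_000)

print(f"small model: {small:.2f}")  # 0.35 of the original size
print(f"big model:   {big:.2f}")    # 0.90 -- better payload, worse total
```

Past the point where extra parameters cost more bytes than they save in the payload, a bigger model makes the total output larger, which is exactly the deterioration the paper warns about.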


Seems awfully weird to me that Twitter, Facebook, and Reddit have all had similar issues recently that resulted in dramatic user loss.

I think the “enshittification” theory is a more likely explanation.


Is the reviewer just now encountering the internet for the first time?