My worst drafts are a 5/10 but I might have lower standards.
@[email protected]

That means in about 6 months or so the AI content quality will be about an 8/10. The processors spread machine “learning” incredibly fast. Some might even say exponentially fast. Pretty soon it’ll be like that old song “If you wonder why your letters never get a reply, when you tell me that you love me, I want to see you write it”. “Letters” is an old version of one-on-one tweeting, but with no character limit.

@[email protected]

What improvements have there been in the previous 6 months? From what I’ve seen, the AI is still spewing the same 3/10 slop it has since 2021, with maybe one or two improvements bringing it up from 2/10. I’ve heard several people say some newer/bigger models actually got worse at certain tasks, and clean training data has pretty much dried up, so there’s little left to train new models on.

I just don’t see any world where scaling up the compute and power usage is going to suddenly improve the quality by orders of magnitude. By design, LLMs output the most statistically likely response, and the most statistically likely response is almost by definition the most average, bland response possible.
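That “most statistically likely” point can be sketched in a few lines. Here is a toy next-token distribution (made-up tokens and probabilities, not from any real model) showing how greedy maximum-likelihood decoding always picks the safest, most average continuation, while temperature sampling lets rarer ones through:

```python
import random

# Hypothetical next-token distribution after a prompt like "The food was".
probs = {"good": 0.40, "great": 0.25, "fine": 0.20,
         "transcendent": 0.10, "an ambush": 0.05}

def greedy(dist):
    # Always pick the single most likely token: the blandest choice wins every time.
    return max(dist, key=dist.get)

def sample(dist, temperature=1.0):
    # Raising p to the power 1/T flattens the distribution when T > 1,
    # so rarer, more distinctive tokens come through more often.
    tokens = list(dist)
    weights = [dist[t] ** (1 / temperature) for t in tokens]
    return random.choices(tokens, weights=weights)[0]

print(greedy(probs))        # always "good"
print(sample(probs, 1.5))   # varies; sometimes the weird options
```

Real decoders layer more on top (top-k, nucleus sampling), but the basic trade-off is the same: the safer the decoding, the more average the prose.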

@[email protected]

I love how you idiots think this tech hasn’t already hit its ceiling. It’s been functionally stagnant for some time now.

@[email protected]

Hey. I’m just one idiot. Who else are you talking about?

@[email protected]

You’re not just one, you’re one of many. All saying the same shit.

@[email protected]

Lots and lots of people have told me that.

@[email protected]

That tracks.

@[email protected]

People just like you.

@[email protected]

Some folks know what the fuck they’re talking about.

@[email protected]

My hot take: it will never get to even a 6/10. Most likely it will just spit out 3/10 slop faster and faster.

Skua

Only if you assume that its performance will continue improving for a good while and (at least) linearly. The companies are really struggling to give their models more compute or more training data now, and frankly it doesn’t seem like there have been any big strides for a while.

@[email protected]

Yeah… Linear increases in performance appear to require exponentially more data, hardware, and energy.
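That exponential cost shows up if you assume a power-law scaling curve, which is roughly the shape the labs report. A toy version (made-up constants, not fitted to any real model) shows how each fixed drop in loss multiplies the compute bill:

```python
# Hypothetical power-law scaling curve: loss(C) = a * C**(-b).
# The constants are illustrative only.
a, b = 10.0, 0.05

def compute_for_loss(loss):
    # Invert loss = a * C**(-b)  =>  C = (a / loss)**(1 / b)
    return (a / loss) ** (1 / b)

for loss in (3.0, 2.9, 2.8, 2.7):
    print(f"loss {loss}: compute ~ {compute_for_loss(loss):.2e}")
```

With these numbers, every 0.1 drop in loss costs roughly 2x the compute, so a linear sequence of improvements demands a geometric sequence of GPU spend.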

Meanwhile, the big companies are passing around the same $100bn IOU, amortizing GPUs on 6-year schedules but burning them out in months, using those same GPUs as collateral on massive loans, and spending based on an ever-accelerating number of data centers which are not guaranteed to get built or receive sufficient power.

@[email protected]

I doubt that. A lot of the poor writing quality comes down to choice. All the most powerful models are deliberately trained to be bland, seek harmony with the user, and generally come across as kind of slimy in a typically corporate sort of way. This bleeds into the writing style pretty heavily.

A model trained specifically for creative writing without such a focus would probably do better. We’ll see.

@[email protected]

I mean, look at a hobby project like Neuro-sama vs ChatGPT.

It’s a night-and-day difference in terms of responses and humanity.

While Neuro and her sister both come across as autistic 7-year-olds, they at least come across as mostly human autistic 7-year-olds. They have their moments where they just lose it, which every LLM has.

But comparing them, it’s really obvious how many of the problems with the inhumanity and blandness are a choice by large companies to keep their LLMs marketable and corpo-friendly.

In a world where these models could be trained and allowed to have actually human-ish responses, focused on being “normal” instead of sterile robots, they would at least be way more fun.

Not much more reliable, mind you. But at least they would be fun.

@[email protected]

Wake me up when that happens. Like literally, @mention me somewhere.
