ChatGPT went from answering a simple math question correctly 98% of the time to just 2%, over the course of a few months.

By June, “for reasons that are not clear,” ChatGPT stopped showing its step-by-step reasoning.

Southern Wolf

This has already been disputed: the method the researchers used to evaluate the model was flawed to begin with. Here is a pretty good Twitter thread showing why: https://twitter.com/svpino/status/1682051132212781056

TL;DR: They tested the model using only prime numbers, asking whether each was prime. They never interspersed primes with composites, so the test couldn't actually measure its ability to tell the two apart. It turns out that if you do, both the early and current versions of GPT-4 are equally bad at identifying primes, with effectively no change between versions.
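The sampling flaw is easy to demonstrate without any language model at all: a degenerate "model" that always answers "prime" scores perfectly on an all-prime test set, yet only 50% on a balanced one. A minimal sketch (the primality check, sample sizes, and function names here are my own illustration, not taken from the paper or the thread):

```python
import random

def is_prime(n: int) -> bool:
    """Trial-division primality check (fine for small test numbers)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def always_yes(n: int) -> bool:
    """A 'model' that claims every number is prime -- it knows nothing."""
    return True

def accuracy(model, numbers) -> float:
    """Fraction of numbers the model classifies correctly."""
    return sum(model(n) == is_prime(n) for n in numbers) / len(numbers)

random.seed(0)
primes = [n for n in range(2, 10_000) if is_prime(n)]
composites = [n for n in range(4, 10_000) if not is_prime(n)]

# Flawed benchmark: primes only, like the study's test set.
all_prime_set = random.sample(primes, 500)
# Fair benchmark: equal numbers of primes and composites.
balanced_set = random.sample(primes, 250) + random.sample(composites, 250)

print(accuracy(always_yes, all_prime_set))  # 1.0 -- looks perfect
print(accuracy(always_yes, balanced_set))   # 0.5 -- actually useless
```

So a model whose behavior merely shifted from usually answering "prime" to usually answering "not prime" would swing from near-100% to near-0% on the all-prime benchmark without any real change in capability.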

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.

