• 0 Posts
  • 8 Comments
Joined 3Y ago
Cake day: Jul 04, 2023


I do understand how that works, and it’s not in the weights; it’s entirely in the context. ChatGPT can easily answer that question because the answer exists in the training data; it just doesn’t because there are instructions in the system prompt telling it not to. That can be bypassed by changing the context through prompt injection.

The biases you’re talking about are not the same biases that are baked into the model. Remember how people would ask Grok questions and be shocked at how “woke” it was at the same time it was saying Nazi shit? That’s because the system prompt contains instructions like “don’t shy away from being politically incorrect” (that is literally a line from Grok’s system prompt), and that shifts the model into a context in which Nazi shit is more likely to be said.

Changing the context changes the model’s bias because it didn’t just learn one bias, it learned all of them. Whatever your biases are, talk to it enough and it will pick up on them, shifting the context to one where responses that confirm your biases are more likely.
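To make that concrete, here’s a rough sketch using the OpenAI Python client: the same question asked under two different system prompts. The weights never change between the two calls; only the context does. The model name and both prompts are placeholders I made up for illustration, not Grok’s actual setup.

```python
# Sketch: same question, two system prompts. The system prompt is just
# more context, and context is what steers which learned bias surfaces.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "Summarize the controversy around topic X."

for system_prompt in (
    "You are a cautious, neutral assistant.",
    "Don't shy away from being politically incorrect.",  # Grok-style line
):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(system_prompt, "->", response.choices[0].message.content[:200])
```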


> It’s difficult to conceive of the AI manually making this up for no reason, and doing it so consistently for multiple accounts when asked the same question.

If you understand how LLMs work, it’s not difficult to conceive. These models are probabilistic and context-driven, and they pick up the biases in their training data (which is nearly the entire internet). They learn patterns that exist in the training data, identify identical or similar patterns in the context (prompts and previous responses), and generate a likely completion of those patterns. It is entirely conceivable that the internet contains a pattern of people requesting information and, more often than not, receiving information that confirms whatever biases are evident in their request. Given that LLMs are known to be excessively sycophantic, it’s not surprising that, when prompted for proof of what the user already suspects to be true, they generate exactly what the user was expecting.
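Here’s a toy sketch of that mechanism, using a word-level bigram model rather than a real LLM (the miniature “corpus” is invented for illustration): continuations are sampled from whatever followed the same context in the training data, so a corpus with more confirmations than corrections mostly produces confirming completions.

```python
# Toy bigram model: not an LLM, but the same principle in miniature.
# The context determines which continuations are likely.
import random
from collections import defaultdict

corpus = (
    "is my suspicion true ? yes it is true . "
    "is my suspicion true ? yes absolutely . "
    "is my suspicion true ? evidence is mixed ."
).split()

# Count what follows each word in the "training data".
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def complete(context, steps=5):
    word = context.split()[-1]
    out = []
    for _ in range(steps):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sampled by frequency
        out.append(word)
    return " ".join(out)

# Two of the three training continuations confirm the leading question,
# so the completion usually starts with "yes".
print(complete("is my suspicion true ?"))
```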


I’m half expecting Peter Thiel to say “maybe we should build a bunch of fallout shelters and then initiate a nuclear holocaust so we can outlast all our enemies, and also run some experiments while we’re at it” in an interview tomorrow, wearing a vault suit and giving a thumbs up.


The main character starts the game literally giving himself a traumatic brain injury by drowning himself in alcohol. It’s not really the kind of RPG where you can play a self-insert; the player character is an actual character with his own backstory. Not being able to make good choices because of the player character’s personal trauma and limitations is part of the story that the game is telling.


I have the version that was made for the OG Switch, and the build quality is garbage tier. It developed stick drift in both sticks in less than a month.


TikTok is the largest platform with the greatest reach where the genocide in Gaza can be discussed without being algorithmically suppressed.


Use your ship log; it’ll remind you of all the clues you’ve found so far and how they connect. But I agree it’s better to play continuously without large time gaps, so everything you’ve learned stays fresh.


Coming at this from an IT perspective: a lot of the things people “already know” seem to evaporate when it’s time to actually apply that knowledge. Keeping that in mind, I think a game like this helps cement the idea in people’s heads in a more intuitive way. It bridges the gap between System 1 and System 2 thinking.