


Or maybe even every session?
Fuck that… every frame.
Although, that was the experience with Stable Diffusion a few years ago. WAN Video is good enough to remember a face for the duration of the processed clip (4-5 seconds or so). But as soon as it needs any sort of object permanence, like when the face is hidden, it suddenly doesn’t know what it was looking at any more. Seems like a solvable problem with reference images. I doubt NVIDIA thought that far, though.


I didn’t just ask the ai to do something I new which library I wanted it to use and new what I wanted it to interface with and new exactly what I wanted it to do.
I have no understanding of the language
No shit… you don’t even have an understanding of the English language. No wonder the LLM didn’t understand you.


If you let the AI do the thinking for you, then you’re building AI slop.
No, for major projects, you start out with a plan. I may spend upwards of 2-3 hours just drafting a plan with the LLM, figuring out options, asking questions when it’s an area I don’t have top-familiarity with, crafting what the modules are going to look like. It’s not slop when you’re planning out what to do and what your end result is supposed to be.

People who talk this way have zero experience with actually using LLMs, especially coding models.


We would be faaaaaar less hostile towards copyrights if we had a regular source of RECENT public domain coming out every year.
I’m not saying that it would make GPL or OSS licenses useless. I’m just saying that those licenses are motivated and needed because we don’t live in a society where freely available media and data are commonplace.


but in the end it boils down to being a text prediction machine.
And we’re barely smarter than a bunch of monkeys throwing piles of shit at each other. Being reductive about its origins doesn’t really explain anything.
I trust the output as much as a random Stackoverflow reply with no votes :)
Yeah, but that’s why there’s unit tests. Let it run its own tests and solve its own bugs. How many mistakes have you or I made because we hate writing unit tests? At least the LLM has no problem writing the tests once you know the code works.


Then they stop bothering to review the code.
This happens with human code reviews all the time.
“I don’t really understand this code, but APPROVE!”
“You need this thing merged today? APPROVE!”
“This code is too long, and it’s almost my lunch break. APPROVE!”
Over and over and over again. The worst insult you can give me is to take code I spent days working on and approve it five minutes after I submitted it to you.


the problem is that if I have to nanny it to make sure it doesn’t make a mistake then how is it a useful product?
When was the last time you coded something perfectly? “If I have to nanny you to make sure you don’t make a mistake, then how are you a useful employee?” See how that doesn’t make sense? There’s a reason why good development shops live on the backs of their code reviews and review practices.
The math ain’t matching on this one.
The math is just fine. Code reviews, even audit-level thorough ones, cost far less time than doing the actual coding.
There’s also something to be said about the value in being able to tell an LLM to go chew on some code and tests for 10 minutes while I go make a sandwich. I get to make my sandwich, and come back, and there’s code there. I still have to review it, point out some mistakes, and then go back and refill my drink.
And there’s so much you can customize with personal rules. Don’t like its coding style? Write Markdown rules that reflect your own style. Have issues with it tripping over certain bugs? Write rules or memories that remind it to be more aware of those bugs. Are you explaining a complex workflow to it over and over again? Explain it once, and tell it to write the rules file for you.
All of that saves more and more time. The more rules you have for a specific project, the more knowledge it retains on how to code for that project, and the more experience you gain in communicating with an entity that can understand your ideas. You wouldn’t believe how many people can’t rubberduck and explain concepts properly to people, much less LLMs.
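For anyone who hasn’t seen one: a rules file is usually just plain Markdown the agent reads before it starts working. A minimal hypothetical sketch (the headings and contents are made up for illustration, not copied from any particular tool):

```markdown
# Project rules

## Style
- Use 4-space indentation; no tabs.
- Prefer early returns over deeply nested if/else.

## Known pitfalls
- The data layer stores timestamps in UTC; never use localtime helpers.

## Workflow
- After any change, run the unit tests before declaring the task done.
```

Anything project-specific you find yourself repeating in prompts is a candidate for a line in a file like this.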
LLMs are patient. They don’t give a shit if you keep demanding more and more tweaks and fixes, or if you have to spend a bit of time trying to explain a concept. Human developers would get tired of your demands after a while, and tell you to fuck off.


However, for more specialized code, I would be concerned. It would likely not function at all without editing, and if it did function it probably wouldn’t be optimized or secure.
That’s not completely true. Claude and some of the Chinese coding models have gotten a lot better at creating a good first pass.
That’s also why I like tests. Just force the model to prove that it works.
Oh, you built the thing and think it’s finished? Prove it. Go run it. Did it work? No? Then go fix the bugs. Does it compile now? Cool, run the unit test platform. Got more bugs? Fix them. Now, go write more unit tests to match the bugs you found. You keep running into the same coding issue? Go write some rules for me that tell yourself not to do that shit.
I mean, I’ve been doing this programming shit for decades, and even I’ve been caught by my own overconfidence, trying to write some big project and thinking it’s just going to work the first time. No reason to think even a high-powered Claude thinking model is going to magically write the whole thing bug-free.



If a FOSS project is archived/unmaintained, for a large enough project, someone else will pick up where the original left off.
No, they won’t. This line of thinking is how we got the above.
Maintenance is thankless work, and nobody wants to do a fucking thankless job, especially when the last maintainer was given a bunch of shit for it.


In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.
Oh, it’s more than subconscious, as you can see in this thread.
A Lutris developer makes a perfectly sane and nuanced response to a reactionary “is Lutris slop now” comment, and gets shit on for it, because everybody has to fight in black-and-white terms. To these people, there are no grey opinions, only battle lines to be drawn.
What? Are you all going to shit on your lord and savior Linus himself for also saying he uses LLMs? Oh, what, you didn’t know?!?


Agreed, I don’t understand people not even giving it a chance. They try it for five minutes, it doesn’t do exactly what they want, they give up on it, and shout how shit it is.
Meanwhile, I put the work in, see it do amazing shit after figuring out the basics of how the tech works, write rules and skills for it, have it figure out complex problems, etc.
It’s like handing your 90-year-old grandpa the Internet, and they don’t know what the fuck to do with it. It’s so infuriating.
Probably because, like your 90-year-old grandpa with the Internet, you have to know how to use the search engine. You have to know how to communicate ideas to an LLM, in detail, with fucking context, not just “me needs problem solvey, go do fix thing!”


You could fit all the spent nuclear fuel humanity has ever used into a single swimming pool.
Around the U.S., about 90,000 tons of nuclear waste is stored at over 100 sites in 39 states, in a range of different structures and containers.
I’m fairly sure a swimming pool can’t hold 90 kilotons of nuclear waste.
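A quick back-of-the-envelope check supports that skepticism. Assuming an Olympic pool (50 m × 25 m × 2 m) and the density of solid uranium dioxide (~11 g/cm³, a generous upper bound, since real waste forms are less dense and can’t be packed solid):

```python
# Back-of-the-envelope: can an Olympic pool hold 90,000 tonnes of spent fuel?
pool_volume_m3 = 50 * 25 * 2        # Olympic pool: 2,500 m³
uo2_density_kg_m3 = 10_970          # solid uranium dioxide, an upper bound

pool_capacity_tonnes = pool_volume_m3 * uo2_density_kg_m3 / 1000
print(f"Pool filled solid with UO2: ~{pool_capacity_tonnes:,.0f} tonnes")
# → ~27,425 tonnes, well under 90,000
```

Even packed solid with pure UO2, the pool tops out around a third of the quoted U.S. inventory.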
Also, not needing enriched uranium is a pretty big deal, considering it’s an expensive process. And just having an enrichment facility is enough for the UN to stop and take notice, start flailing around with their arms in the air, and scream about nuclear weapons projects.
open-weights aren’t open-source.
This has always been a dumb argument, and it lacks any modicum of practicality. It rejects 95% of the need because it’s not 100% to your liking.
As we’ve seen in the text-to-image/video world, you can train on top of base models just fine. Or create LoRAs for specialization. Or change them into various styles of quantized GGUFs.
Also, you don’t need a Brazilian LLM because all of the LLMs are very multilingual.
Spending $3000 on training is still really cheap, but depending on the size of the model, you can still get away with training on 24GB or 32GB cards, which cost you the price of the card and energy. LoRAs take almost nothing to train. A university that is worth anything is going to have the resources to train a model like that. None of these arguments hold water.
DeepSeek API isn’t free, and to use Qwen you’d have to sign up for Ollama Cloud or something like that
To use Qwen, all you need is a decent video card and a local LLM server like LM Studio.
Local deploying is prohibitive
There’s a shitton of LLM models in various sizes to fit the requirements of your video card. Don’t have the 256GB of VRAM required for the full 8-bit quantized 235B Qwen3 model? Fine, get the 4-bit quantized 30B model that fits on a 24GB card. Or a Qwen3 8B Base with DeepSeek-R1 post-training, quantized to 6-bit, that fits on an 8GB card.
There are literally hundreds of variations that people have made to fit whatever size you need… because it’s fucking open-source!
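Those sizes aren’t arbitrary; a quantized model’s weight footprint is roughly parameter count × bits per weight ÷ 8 bytes (plus overhead for context and activations). A rough sketch of the arithmetic:

```python
def weights_gb(params_billions: float, bits: int) -> float:
    """Approximate weight memory in GB: params * (bits / 8) bytes."""
    return params_billions * 1e9 * bits / 8 / 1e9

# 235B at 8-bit: ~235 GB -> needs a 256GB-class multi-GPU setup
print(round(weights_gb(235, 8)))  # → 235
# 30B at 4-bit: ~15 GB -> fits a 24GB card with room for context
print(round(weights_gb(30, 4)))   # → 15
# 8B at 6-bit: ~6 GB -> fits an 8GB card
print(round(weights_gb(8, 6)))    # → 6
```

The real numbers run a bit higher once you add KV cache and runtime overhead, which is why the 24GB card is comfortable for a ~15 GB model rather than a ~20 GB one.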


This is just this scene from The Jerk in written form.


They are a billion-dollar company because they made decisions like these over the last several decades. They could have gone the easy route, made decisions that fuck over the consumer, and made billions of dollars in more insidious ways, but they didn’t.
Steam Deck and their commitment to Proton are the reason why we can even have a conversation like this, talking about the rise of Linux Gaming in the year of our lord 2026. Without those two components, we would still be talking about how Windows 11 is fucking us over (while still using it), how nobody likes to switch to Linux because they still want to play games, how the whole “Year of the Linux Desktop” is the same tired fucking joke it’s been for the last 30 years.
Instead, we’re in the timeline where we have enough Linux gaming developers to form their own fucking collective! Because of Valve!


It’s not easy, but it’s done all the time. New models, new LoRAs, and in some cases, the training data doesn’t even need to be very large for a specific task.
You don’t need the entire training dataset that the model was built from.
A while back, I told myself that I wasn’t going to watch any YouTube video over an hour. But sometimes, really good YouTubers I respect, like Folding Ideas or RLM or Grimbeard, end up putting out great two-hour material, and I just end up watching it anyway. I might watch for an hour, switch over to something else, and then watch the rest later. I just did that with the RLM Christmas video they put out a few hours ago. It’s a hell of a lot better than the fleeting TikTok garbage that’s geared towards maximum overstimulation and minimum education.
But I also watch a lot of YouTube, and I still have my limits. I don’t understand these videos that go up to 5-6 hours. That’s just a lack of restraint in the editing department, in my opinion.


Soooo, why aren’t we hacking these guys to oblivion?