queermunist she/her

/u/outwrangle before everything went to shit in 2020, /u/emma_lazarus for a while after that, now I’m all queermunist!

  • 0 Posts
  • 181 Comments
Joined 3Y ago
Cake day: Jul 10, 2023


They said “China doesn’t have problems with homeless or unemployment” and that was correct, 5.1% unemployment is full employment. You’re just too up your own ass to communicate with, you just want to “win” instead of talk like a reasonable person. You see me as something inhuman, just another mindless hivemind NPC to crush with your incredible intellect.

This wasn’t a contest and you didn’t win anything, you just ruined a discussion for no reason.


… you’re really proving my point. You don’t care about anything I have to say or any of the facts or data that I quote or link to, you just want to pick and win fights.

It doesn’t matter that I find the LLM bubble stupid and I don’t think we’re even close to AI replacing human labor, because I dared correct you about China. It doesn’t matter that I think China isn’t far enough along the path of socialist development for it to actually be a good thing when AI does replace human labor. It doesn’t matter that my support for China has a lot of caveats and criticisms, some of which do surround queer rights. None of that nuance matters, nothing I say matters, and nothing I believe matters.

All that matters is that I’m in the way and that you need to tear me down so you can win the posting RPG.

There will be a point where it can take our jobs, though. The question is whether the AI will be privately or collectively owned. If it's privately owned, we're fucked and we're all going to be turned into paperclips. If it's collectively owned, then we can decide which jobs are eliminated, and the people whose jobs are eliminated can help decide what they do next, now that they're free.


Do you want to work for a wage for the rest of your life simply to survive? I certainly don’t.

We have to be careful, because if the AI which automates our labor is owned privately then it will be used to eliminate our jobs and then eliminate the surplus population left behind. AI which is owned collectively, however, will liberate us from work.

We just have to get to that stage of collective ownership. China hasn’t reached that stage of socialist development but they’re closer than anyone else and seem to be progressing in that direction. Not that you care, you just want to pick fights.


Anything between 4% and 6% is generally considered full employment in most countries, including the US. At that level, unemployment mostly consists of people between jobs, rather than people in long-term unemployment with no hope of finding work.

The homelessness number you're referencing is misleading. They're not unhoused; they're living outside of their registered address. A day laborer who moves from city to city and stays in group homes is counted as "homeless" but isn't actually living on the street. Much of rural China has yet to be developed to a level that can sustain employment, so people have to migrate.

Not that you care.



That’s another benefit of the TikTok takeover, it frees up other social media sites to crack down even harder because now they don’t need to worry about bleeding their user base.


I’ve spent six years building an AI startup

We should trust this AI huckster that the AI is good now.


Do you think the US acquisition has zero impact on Tiktok globally?


Because the sale of Tiktok was done to suppress Palestinian voices?


How do you think a child would feel after having a pornographic image generated of them and then published on the internet?

Looks like sexual abuse to me.


The people pushing AI don’t like hearing, seeing, and reading about how other humans experience the world. They actually do just want flashing colors and sounds poured into their face holes. They’re basically incapable of understanding art.



I heard about that! Heat from data centers is harder, unfortunately, because a nuclear reactor generates far more waste heat than a data center does. Smaller quantities of heat cost more to recover, at least until the waste-heat sources are linked into a single network that can collect from multiple data centers.

China is moving fast, though. I bet we’ll see some kind of project like this before the bubble pops in the US.
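For a sense of the scale gap, here's a back-of-envelope comparison in Python. Every number in it (reactor size, thermal efficiency, data center draw) is my own illustrative assumption, not a figure from any article:

```python
# Back-of-envelope comparison of waste heat: one large reactor vs one large
# data center campus. All numbers below are assumed round figures.

reactor_electric_mw = 1000          # ~1 GW electric, a typical large reactor
reactor_thermal_efficiency = 0.33   # roughly a third of thermal power becomes electricity
reactor_thermal_mw = reactor_electric_mw / reactor_thermal_efficiency
reactor_waste_heat_mw = reactor_thermal_mw - reactor_electric_mw  # ~2000 MW rejected

datacenter_power_mw = 100           # assumed draw for a large campus
datacenter_waste_heat_mw = datacenter_power_mw * 0.97  # nearly all power ends as heat

ratio = reactor_waste_heat_mw / datacenter_waste_heat_mw
print(f"Reactor waste heat: ~{reactor_waste_heat_mw:.0f} MW")
print(f"Data center waste heat: ~{datacenter_waste_heat_mw:.0f} MW")
print(f"Ratio: ~{ratio:.0f}x")
```

Under these assumptions the reactor rejects roughly twenty times the heat of the data center, which is why district heating pencils out for reactors long before it does for server farms.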


I feel like dumping “waste” heat is the same as burning off “waste” gas from pumping oil.

It’s not really a waste byproduct; there just isn’t a profitable way to utilize it. That heat could be used, but we dump it into the ocean or the atmosphere because that’s cheaper than building municipal heating or recycling it for industrial uses.


I’m looking forward to the podcasts that’ll make fun of it.



For me it was Microsoft’s support for genocide in Palestine.

You want to support that?


They are planning to round people up by the millions and are doing it based on skin color and language. They’re putting them in concentration camps, with plans to build many many more camps to house all the detainees.

It’s an ethnic cleansing campaign.


It is, in fact, illegal for undocumented immigrants to own guns.


ICE is ethnically cleansing the country literally right now.


The secret to increasing profit is to cut costs, always, no matter what. Research? Workforce development? Safety? Just keep cutting costs, shareholders love it!

What do you mean we’re falling behind? We’ve cut so many costs, growth is infinite!



I don’t think they’re trying to get in on the ground floor of the new paradigm, I think they have to make AI work or the whole economy blows up and so they’re going all in.


Beijing is right to see exporting AI hardware and models as leverage. Each Nvidia chip sent abroad is a new point on the board for American software and values. Every U.S.-branded LLM shapes AI norms globally.

Who was it that said you need to read the business press if you want to know what’s really going on in the world?

This just gives the game away. Imperialism is invisible in the rest of the media, but the WSJ just comes out and says it.



You have to check it every single time, though, erasing any time savings. You’re saving effort, maybe, but not time.
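As a rough expected-time model (the numbers here are pure assumptions for illustration, not measurements): the LLM route only saves time if review plus expected rework comes in under just writing it yourself.

```python
# Expected-time comparison: write it yourself vs generate-then-review.
# All four numbers are assumed for illustration.

write_yourself_min = 30   # time to just do the task yourself
review_min = 12           # you must review every LLM output
p_wrong = 0.5             # assumed chance the output needs fixing
fix_min = 36              # time to fix a bad output

llm_expected_min = review_min + p_wrong * fix_min
print(f"LLM route: ~{llm_expected_min:.0f} min expected, vs {write_yourself_min} min yourself")
```

At these assumed rates the two routes tie at 30 minutes: effort shifts from writing to reviewing, but no time is saved. The conclusion flips entirely with the review and rework costs, which is the point.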


But doesn’t the LLM sometimes churn out tedious garbage that you have to fix, thus not actually saving time?


It’s very relevant, because this is the reason they hallucinate. They’re just completing the pattern, no matter how nonsensical it is, because they don’t even know what they’re doing. They don’t reason, they just regurgitate whatever fits the pattern. There’s no awareness of what they’re saying, there’s just a logic chain that says “if I say this, then I say this next” without concern about context or reality. No reason, no awareness, no thought.

It does not think it’s human. It’s just mindlessly regurgitating words to fit a pattern. That’s it.
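A toy sketch of what "completing the pattern" means: a bigram model (a deliberate caricature of my own, nothing like a production LLM) that picks each next word purely from co-occurrence counts, with no model of truth or meaning behind it.

```python
# Toy bigram "language model": it samples the next word in proportion to how
# often that word followed the previous one in its corpus. Fluent-looking
# continuations, zero awareness of what any of it means.
import random
from collections import Counter, defaultdict

corpus = ("the capital of france is paris . "
          "the capital of spain is madrid . "
          "the capital of france is paris .").split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word weighted by how often it followed `prev`."""
    counter = follows[prev]
    words = list(counter)
    weights = [counter[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a continuation: the model completes the pattern and nothing more.
random.seed(0)
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Scaled-up models are vastly more sophisticated than this, but the failure mode is the same shape: the output is whatever fits the statistics, whether or not it's true.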



What is the reason you think philosophy of mind exists as a field of study?

In part, so we don’t assign intelligence to mindless, unaware, unthinking things like slime mold. It keeps our definitions clear and useful, so we can communicate about and understand what intelligence even is.

What you’re doing actually creates an unclear and useless definition that makes communication harder and spreads misunderstanding. Your definition of intelligence, which is what the AI companies use, has made people more confused than ever about “intelligence” and only serves the interests of the companies for generating hype and attracting investor cash.


Let me rephrase. If your definition of intelligence includes slime mold then the term is not very useful.

There’s a reason philosophy of mind exists as a field of study. If we just assign intelligence to anything that can solve problems, which is what you seem to be doing, we are forced to assign intelligence to things that clearly don’t have minds, aren’t aware, and can’t think. That’s a problem.


If your definition of intelligence doesn’t include awareness it’s not very useful.


My understanding is that the reason LLMs struggle with solving math and logic problems is that those have certain answers, not probabilistic ones. That seems pretty fundamentally different from humans! In fact, we have a tendency to assign too much certainty to things which are actually probabilistic, which leads to its own reasoning errors. But we can also correctly identify actual truth, prove it through induction and deduction, and then hold onto that truth forever and use it to learn even more things.

We certainly do probabilistic reasoning, but we also do axiomatic reasoning; we are more than probability engines.
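The contrast can be sketched in a few lines: a lookup-table "reasoner" that only echoes sums it has memorized (a caricature of pattern completion I made up, not how any real model stores anything) versus evaluation from the actual rule of addition, which is certain for inputs it has never seen.

```python
# Pattern completion vs axiomatic evaluation, as a deliberately crude contrast.

seen = {("2", "+", "2"): "4", ("3", "+", "5"): "8"}  # the "training data"

def pattern_complete(a, op, b):
    """Echo a memorized continuation; guess when the pattern is new."""
    return seen.get((a, op, b), "4")  # falls back to its most common answer

def axiomatic_eval(a, op, b):
    """Apply the rule of addition itself: derived, not recalled."""
    assert op == "+"
    return str(int(a) + int(b))

print(pattern_complete("2", "+", "2"))    # "4"  - looks right
print(pattern_complete("17", "+", "25"))  # "4"  - confidently wrong on an unseen pattern
print(axiomatic_eval("17", "+", "25"))    # "42" - correct for any inputs
```

The memorizer is right exactly as often as its training data covers the question; the axiomatic evaluator is right by construction.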



So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

What? No.

Chatbots can’t think because they literally aren’t designed to think. If you somehow gave a chatbot a body it would be just as mindless because it’s just a probability engine.


My definition of artificial is a system that was consciously engineered by humans.

And humans consciously decided what data to include, consciously created most of the data themselves, and consciously annotated the data for training. Conscious decisions are all over the dataset, even if they didn’t design the neural network directly from the ground up. The system still evolved from conscious inputs, you can’t erase its roots and call it natural.

Human-like object concept representations emerge from datasets made by humans because humans made them.


I’m saying that the terms “natural” and “artificial” are in a dialectical relationship, they define each other by their contradictions. Those words don’t mean anything once you include everything humans do as natural; you’ve effectively defined “artificial” out of existence and as a result also defined “natural” out of existence.


If we define human inputs as “natural” then the word basically ceases to mean anything.

It’s the equivalent of saying that paintings and sculptures emerge naturally because artists are human and humans are natural.


LLMs create a useful representation of the world, similar to our own, when we feed them our human-created, human-curated, human-annotated data. This doesn’t tell us much about the nature of large language models or the nature of object concept representations; what it tells us is that human inputs result in human-like outputs.

Claims about “nature” are much broader than the findings warrant. We’d need to see LLMs fed entirely non-human datasets (no human creation, no human curation, no human annotation) before we could make claims about what emerges naturally.


I’m not disputing this, but I also don’t see why that’s important.

What’s important is the use of “natural” here, because it implies something fundamental about language and material reality, rather than this just being a reflection of the human data fed into the model. You did it yourself when you said:

If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

And we just don’t know this; the paper doesn’t demonstrate it because (as I’ve said) we aren’t feeding the LLMs raw data from the environment. We’re feeding them inputs from humans, and they’re displaying human-like outputs.

Did you actually read through the paper?

From the paper:

to what extent can complex, task-general psychological representations emerge without explicit task-specific training, and how do these compare to human cognitive processes across a broad range of tasks and domains?

But their training data is still a data set picked by humans and given textual descriptions written by humans, then probed with a representation learning method previously designed for human participants. That’s not “natural”, that’s human.

A more accurate conclusion would be: human-like object concept representations emerge when fed data collected by humans, curated by humans, annotated by humans, and then tested by representation learning methods designed for humans.

human in ➡️ human out