queermunist she/her

/u/outwrangle before everything went to shit in 2020, /u/emma_lazarus for a while after that, now I’m all queermunist!

  • 0 Posts
  • 152 Comments
Joined 2Y ago
Cake day: Jul 10, 2023


What is the reason you think philosophy of mind exists as a field of study?

In part, so we don’t assign intelligence to mindless, unaware, unthinking things like slime mold. It keeps our definitions clear and useful, so we can communicate about and understand what intelligence even is.

What you’re doing actually creates an unclear, useless definition that makes communication harder and spreads misunderstanding. Your definition of intelligence, which is the one the AI companies use, has made people more confused than ever about “intelligence”, and it only serves the companies’ interest in generating hype and attracting investor cash.


Let me rephrase. If your definition of intelligence includes slime mold then the term is not very useful.

There’s a reason philosophy of mind exists as a field of study. If we just assign intelligence to anything that can solve problems, which is what you seem to be doing, we are forced to assign intelligence to things which clearly don’t have minds, aren’t aware, and can’t think. That’s a problem.


If your definition of intelligence doesn’t include awareness it’s not very useful.


My understanding is that the reason LLMs struggle with solving math and logic problems is that those have certain answers, not probabilistic ones. That seems pretty fundamentally different from humans! In fact, we have a tendency to assign too much certainty to things which are actually probabilistic, which leads to its own reasoning errors. But we can also correctly identify actual truth, prove it through induction and deduction, and then hold onto that truth forever and use it to learn even more things.

We certainly do probabilistic reasoning, but we also do axiomatic reasoning, i.e. we are more than probability engines.
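To make the contrast concrete, here’s a toy sketch (the answer distribution is entirely made up and isn’t how any particular model works): an answer sampled from a probability distribution can disagree with itself across runs, while axiomatic arithmetic yields the same certain answer every time.

```python
import random

# Toy "probability engine": samples an answer to "2 + 2 = ?" from a
# made-up distribution, so repeated runs can disagree with each other.
answer_probs = {"4": 0.90, "5": 0.06, "22": 0.04}  # hypothetical values

def sampled_answer(probs: dict[str, float]) -> str:
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Axiomatic reasoning: the answer follows necessarily from the rules of
# arithmetic, so it is certain on every run.
def exact_answer() -> str:
    return str(2 + 2)

print(sampled_answer(answer_probs))  # usually "4", occasionally not
print(exact_answer())                # always "4"
```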



So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

What? No.

Chatbots can’t think because they literally aren’t designed to think. If you somehow gave a chatbot a body it would be just as mindless because it’s just a probability engine.


My definition of artificial is a system that was consciously engineered by humans.

And humans consciously decided what data to include, consciously created most of the data themselves, and consciously annotated the data for training. Conscious decisions are all over the dataset, even if the engineers didn’t design the neural network directly from the ground up. The system still evolved from conscious inputs; you can’t erase its roots and call it natural.

Human-like object concept representations emerge from datasets made by humans because humans made them.


I’m saying that the terms “natural” and “artificial” are in a dialectical relationship; they define each other by their contradictions. Those words don’t mean anything once you count everything humans do as natural; you’ve effectively defined “artificial” out of existence, and with it “natural” as well.


If we define human inputs as “natural” then the word basically ceases to mean anything.

It’s the equivalent of saying that paintings and sculptures emerge naturally because artists are human and humans are natural.


LLMs create a useful representation of the world, one similar to our own, when we feed them our human-created, human-curated, human-annotated data. This doesn’t tell us much about the nature of large language models or the nature of object concept representations; what it tells us is that human inputs result in human-like outputs.

Claims about “nature” are much broader than the findings warrant. We’d need to see LLMs fed entirely non-human datasets (no human creation, no human curation, no human annotation) before we could make claims about what emerges naturally.


I’m not disputing this, but I also don’t see why that’s important.

What’s important is the use of “natural” here, because it implies something fundamental about language and material reality, rather than this just being a reflection of the human data fed into the model. You did it yourself when you said:

If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

We just don’t know that, and this paper doesn’t demonstrate it because (as I’ve said) we aren’t feeding the LLMs raw data from the environment. We’re feeding them inputs from humans, and then they’re displaying human-like outputs.

Did you actually read through the paper?

From the paper:

to what extent can complex, task-general psychological representations emerge without explicit task-specific training, and how do these compare to human cognitive processes across a broad range of tasks and domains?

But the training still uses a data set picked by humans, given textual descriptions made by humans, and then evaluated with a representation learning method previously designed for human participants. That’s not “natural”, that’s human.

A more accurate conclusion would be: human-like object concept representations emerge when models are fed data collected by humans, curated by humans, annotated by humans, and then tested with representation learning methods designed for humans.

human in ➡️ human out


I didn’t say they’re encoding raw data from nature

Ultimately the data both human brains and artificial neural networks are trained on comes from the material reality we inhabit.

Anyway, the data they’re getting doesn’t just come in a human format. The data we record is only recorded because we find it meaningful as humans, and most of the data is generated entirely by humans besides. You can’t separate these things; they’re human-like because they’re human-based.

It’s not merely natural. It’s human.

If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

We don’t know that.

We know that LLMs, when fed human-like inputs, produce human-like outputs. That’s it. That tells us more about LLMs and humans than it tells us about nature itself.


LLMs are not getting raw data from nature. They’re being fed data produced by us and baked into their training sets: human writings and human observations and human categorizations and human judgements about what data is valuable. All the data about our reality that we feed them is from a human perspective.

This is a feature, and will make them more useful to us, but I’m just arguing that raw natural data won’t naturally produce human-like outputs. Instead, human inputs produce human-like outputs.


But it’s emerging from networks trained on data from humans, which means our object concept representations are in the data. This isn’t random data, after all; it comes from us. Seems like the LLMs are just regurgitating what we’re feeding them.

What this shows, I think, is how deeply we are influencing the data we feed to LLMs. They’re human-based models and so they produce human-like outputs.


Isn’t this just because LLMs use the object concept representation data from actual humans?


Big problem with the third-world cubicle farms: how do you evaluate their performance? You’d have to hire even more people to double-check their work; otherwise people will do the smart thing and cut corners to make their jobs easier.

Using books is definitely a way to keep out contamination, though.


Where do you get the real data, though? They just scrape data from websites, but now that chatbots have proliferated, this will only introduce contaminated data. Keeping it clean would require hiring people to scrub contamination from the data sets.


And yet they never catch up. Why do you think that is?

The goal is to keep them weak and dependent, not help them stand as equals with France and Germany.


It isn’t to help them catch up, it’s to keep them limping along without collapsing entirely, while at the same time keeping them dependent so they don’t think of trying to escape. It has the same function as the IMF/World Bank.


In terms of larger countries, it has been beneficial for France and Germany, less so for Italy

It’s just a way for the wealthy metropoles to turn poorer members of the EU into neocolonies. Yeah, it’s great for rich Europeans! Not so much for everyone else. Without the ability to deficit spend (because they lack currency sovereignty), they are forced into austerity and privatization. It’s just financial imperialism.

It’s a bad system and it will collapse.


Ask Greece or the other Mediterranean nations if they are better off without their own currency (hint: they absolutely aren’t)

“Death to the US” is a basic statement of understanding that the US empire is the primary contradiction.


The EU as a whole is an interesting project, but the Eurozone currency bloc was a mistake. All it did was surrender everyone else’s currency sovereignty to Germany and France.

And death to the US.


Don’t give up sovereignty, even to allies! Alliances change, but even ignoring that, it’s akin to letting allies run your infrastructure, make your policies, or own your water. It’s giving part of yourself away.


Data sovereignty is going to be key to maintaining any sovereignty going forward; it’s so vital to the functioning of society and the economy that outsourcing it to another country is just giving part of yourself away.


Deepseek, Huawei, Tencent - China isn’t just a factory for the West anymore. They may not be the single go-to for advanced tech yet, but they’ve become a go-to for advanced tech.

I think it’s too late to slow down their growth.


This is classic underdevelopment, like how Europe and the US deliberately withheld vital equipment and machinery from Africa to prevent them from building their own industrial base.

It’s obviously way too late for this tactic to make any sense, though. China can’t be underdeveloped; they’re already becoming a tech leader.



As if Google isn’t already crammed with Israeli spies. In fact, that’s probably why an Israeli company just got a sweetheart buyout from Google! This is just a continuation of an issue Google already had.


Right, it doesn’t matter if automation is less productive if it’s cheaper. Sure, the robots keep causing damage, but they don’t ask for wages.


Meanwhile here in Iowa our automated material-towing robots keep dumping shit on the floor, running into sensitive equipment, and injuring people.



It certainly increases the legitimacy of Bitcoin and other crypto assets as investment vehicles, I won’t argue with that.

But the original point seemed to be that these are legitimate currencies, and I just don’t see it.


Yes, but even if the US started militarily enforcing Bitcoin mining, is the US trying to control the value of Bitcoin, or is it trying to control the value of Bitcoin in USD? Is the US treating Bitcoin like a commodity that it needs to control (oil, gas) or like a currency?


Not exactly?

The US uses its military to enforce the US dollar’s status as the global reserve currency. Libya tried to sell oil outside the US dollar market, and look what happened to them.

Unless Trump starts using the military to enforce Bitcoin’s value, it still won’t have the same status as a currency.


So, what, people won’t be allowed to read offline anymore? I hear the high seas calling…


Is it important for humanoid robots to be able to push buttons? That seems like a redundant task imo


Surely it’d be cheaper to just replace the control interface with a computerized one than to install a robot to push the buttons.