/u/outwrangle before everything went to shit in 2020, /u/emma_lazarus for a while after that, now I’m all queermunist!
Let me rephrase. If your definition of intelligence includes slime mold then the term is not very useful.
There’s a reason philosophy of mind exists as a field of study. If we just assign intelligence to anything that can solve problems, which is what you seem to be doing, we are forced to assign intelligence to things which clearly don’t have minds and aren’t aware and can’t think. That’s a problem.
My understanding is that the reason LLMs struggle with solving math and logic problems is that those problems have definite answers, not probabilistic ones. That seems pretty fundamentally different from humans! In fact, we have a tendency to assign too much certainty to things which are actually probabilistic, which leads to its own reasoning errors. But we can also correctly identify actual truth, prove it through induction and deduction, and then hold onto that truth forever and use it to learn even more things.
We certainly do probabilistic reasoning, but we also do axiomatic reasoning; i.e., we are more than probability engines.
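To make that distinction concrete, here’s a toy sketch in Python (this isn’t how any real model is implemented, and the probabilities are invented purely for illustration) of the difference between sampling an answer from a distribution over next tokens and computing it axiomatically:

```python
import random

# Toy sketch only: a made-up next-token distribution for the prompt "2 + 2 =".
# Real models are vastly more complicated; these numbers are invented just to
# show the mechanism of sampling an answer rather than deriving it.
next_token_probs = {"4": 0.92, "5": 0.05, "22": 0.03}

def sampled_answer(probs):
    # Probabilistic: draw an answer in proportion to its assigned probability,
    # so the "wrong" answers still come up a small fraction of the time.
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

def axiomatic_answer():
    # Deterministic: the same computation always yields the same, provably
    # correct result.
    return str(2 + 2)

print([sampled_answer(next_token_probs) for _ in range(10)])  # mostly "4", sometimes not
print(axiomatic_answer())  # always "4"
```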
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
What? No.
Chatbots can’t think because they literally aren’t designed to think. If you somehow gave a chatbot a body, it would be just as mindless because it’s just a probability engine.
My definition of artificial is a system that was consciously engineered by humans.
And humans consciously decided what data to include, consciously created most of the data themselves, and consciously annotated the data for training. Conscious decisions are all over the dataset, even if they didn’t design the neural network directly from the ground up. The system still evolved from conscious inputs; you can’t erase its roots and call it natural.
Human-like object concept representations emerge from datasets made by humans because humans made them.
I’m saying that the terms “natural” and “artificial” are in a dialectical relationship; they define each other by their contradictions. Those words don’t mean anything once you include everything humans do as natural; you’ve effectively defined “artificial” out of existence and, as a result, also defined “natural” out of existence.
LLMs create a useful representation of the world that is similar to our own when we feed them our human-created + human-curated + human-annotated data. This doesn’t tell us much about the nature of large language models or the nature of object concept representations; what it tells us is that human inputs result in human-like outputs.
Claims about “nature” are much broader than the findings warrant. We’d need to see LLMs fed entirely non-human datasets (no human creation, no human curation, no human annotation) before we could make claims about what emerges naturally.
I’m not disputing this, but I also don’t see why that’s important.
What’s important is the use of “natural” here, because it implies something fundamental about language and material reality, rather than this just being a reflection of the human data fed into the model. You did it yourself when you said:
If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.
And we just don’t know this, and this paper doesn’t demonstrate this because (as I’ve said) we aren’t feeding the LLMs raw data from the environment. We’re feeding them inputs from humans and then they’re displaying human-like outputs.
Did you actually read through the paper?
From the paper:
to what extent can complex, task-general psychological representations emerge without explicit task-specific training, and how do these compare to human cognitive processes across a broad range of tasks and domains?
But their training data is still a data set picked by humans and given textual descriptions made by humans, and they then used a representation learning method previously designed for human participants. That’s not “natural”; that’s human.
A more accurate conclusion would be: human-like object concept representations emerge when the model is fed data collected by humans, curated by humans, and annotated by humans, and is then tested with representation learning methods designed for humans.
human in ➡️ human out
I didn’t say they’re encoding raw data from nature
Ultimately the data both human brains and artificial neural networks are trained on comes from the material reality we inhabit.
Anyway, the data they’re getting doesn’t just come in a human format. The data we record is only recorded because we find it meaningful as humans, and most of the data is generated entirely by humans besides. You can’t separate these things; they’re human-like because they’re human-based.
It’s not merely natural. It’s human.
If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.
We don’t know that.
We know that LLMs, when fed human-like inputs, produce human-like outputs. That’s it. That tells us more about LLMs and humans than it tells us about nature itself.
LLMs are not getting raw data from nature. They’re being fed data produced by us and compiled into their training sets: human writings and human observations and human categorizations and human judgements about what data is valuable. All the data about our reality that we feed them is from a human perspective.
This is a feature, and will make them more useful to us, but I’m just arguing that raw natural data won’t naturally produce human-like outputs. Instead, human inputs produce human-like outputs.
But it’s emerging from networks of data from humans, which means our object concept representation is in the data. This isn’t random data; after all, it comes from us. Seems like the LLMs are just regurgitating what we’re feeding them.
What this shows, I think, is how deeply we are influencing the data we feed to LLMs. They’re human-based models and so they produce human-like outputs.
In terms of larger countries, it has been beneficial for France and Germany, less so for Italy
It’s just a way for the wealthy metropoles to turn poorer members of the EU into neocolonies. Yeah, it’s great for rich Europeans! Not so much for everyone else. Without the ability to deficit spend (because they lack currency sovereignty), they are forced to do austerity and privatization. It’s just financial imperialism.
It’s a bad system and it will collapse.
This is classic underdevelopment, like how Europe and the US deliberately withheld vital equipment and machinery from African countries to prevent them from building their own industrial base.
It’s obviously way too late for this tactic to make any sense, though. China can’t be underdeveloped; they’re becoming a tech leader at this point.
Not exactly?
The US uses its military to enforce the US dollar’s status as the global reserve currency. Libya tried to sell oil outside the US dollar market, and look what happened to them.
Unless Trump starts using the military to enforce Bitcoin’s value, it still won’t have the same status as a currency.
In part, so we don’t assign intelligence to mindless, unaware, unthinking things like slime mold - it’s so we keep our definitions clear and useful, so we can communicate about and understand what intelligence even is.
What you’re doing actually creates an unclear and useless definition that makes communication harder and spreads misunderstanding. Your definition of intelligence, which is what the AI companies use, has made people more confused than ever about “intelligence” and only serves the companies’ interest in generating hype and attracting investor cash.