Unknown Chinese Phonemaker
Shit title, and it really highlights Bloomberg’s western-supremacist worldview. Unknown to whom? Clearly tons of people know about it if it took over Africa.
They have different brands than we do because they’re a completely different region and market? Nope, never heard of it, so it must be unknown.
My new baseless theory: we know that AI is trained on tons of novels and fictional stories. All novels revolve around significant conflict and drama; stories where some person just boringly does his boring job forever aren’t exactly bestsellers. Is it possible the AI is trying to inject drama even when it makes no sense, because its training data has conditioned it that way? It sees these inconsequential issues, and since every novel it’s ever “read” turns them into massive conflicts, it tries to follow suit.
In the same way your fridge needs a web browser.
Though the point of this is probably not to build a viable product; managing a vending machine is one of those seemingly easy, straightforward tasks that make a good first application for testing an AI. Basically, if it can’t even handle something as simple as a vending machine, it definitely can’t be trusted with anything more complex.
You can’t put toothpaste back in the tube. The only question going forward is how AI will be developed and who will control it.
Fair enough, but even if the model is open source, you still have no control over, or knowledge of, how it was developed or what biases it might have baked in. AI is by definition a black box, even to the people who made it; it can’t even be decompiled like a normal program.
It’s funny that you’d bring up the drug analogy because you’re advocating a war on drugs here.
I mean, China has the death penalty for drug distribution, and it’s supported by the majority of Chinese citizens. They do seem more tolerant of drug users than the US is (I’ve never done drugs in either China or the US, so I wouldn’t know), so the zero tolerance for distributors is clearly a very intentional policy of the Communist party. As far as I know, no socialist country has ever been tolerant of even the distribution of cannabis, let alone hard drugs, and they have made it pretty clear that they never will be.
Personally, I have absolutely no problem with that if the model is itself open and publicly owned. I’m a communist, I don’t support copyrights and IP laws in principle. The ethical objection to AI training on copyrighted material holds superficial validity, but only within capitalism’s warped logic. Intellectual property laws exist to concentrate ownership and profit in the hands of corporations, not to protect individual artists.
I never thought of it in terms of copyright infringement, but in terms of reaping the labour of proletarians while giving them nothing in return. I’m admittedly a far less experienced communist than you, but I see AI as the ultimate means of removing workers from their means of production: it scrapes all of humanity’s intellectual labour without consent to create a product that is inferior to humans in every way except for how much you have to pay it, and it’s only getting the hype it gets because the bourgeoisie see it as a replacement for the very humans it exploited.
For the record, I give absolutely no shits about pirating movies or “stealing” content from any of the big companies, but I personally hold the hobby work of a single person in higher regard. It’s especially unfair to the smallest content creators, because they are most likely making literally nothing from their work; the vast majority of personal projects are uploaded for free on the public internet. It’s therefore unjust (at least to me) to appropriate their free work into something whose literal purpose is to get companies out of paying people for content.

Imagine working your whole life on open source projects, only for no company to want to hire you because they’re using AI trained on your open source work to do what they would have paid you to do. Imagine writing novels your whole life and putting them online for free, only for no publisher to want to pay for your work because they have a million AI monkeys trained on your writing typing out random slop, essentially brute-forcing a bestseller. Open source models won’t prevent this from happening; in fact, they will only make it easier.
AI sounds great in an already communist society, but in a capitalist one, it seems to me like it would be deadly to the working class, because capitalists have made it clear that they intend to use it to eliminate human workers.
Again, I don’t know nearly as much about communism as you so most of this is probably wrong, but I am expressing my opinions as is because I want you to examine them and call me out where I’m wrong.
[Linked article] M3 Ultra Runs DeepSeek R1 With 671 Billion Parameters Using 448GB Of Unified Memory, Delivering High Bandwidth Performance At Under 200W Power Consumption, With No Need For A Multi-GPU Setup
Running the AI is not where the power demand comes from; training it is. If you trained it only once, that wouldn’t be so bad, but obviously every AI vendor will be retraining constantly to keep their model competitive. That’s when you get into a tragedy-of-the-commons situation where collective power consumption spirals out of control for tiny improvements in the model.
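To put rough numbers on that, here’s a back-of-envelope sketch in Python. Every figure in it (the energy of a training run, the energy per query, the retraining cadence, the number of vendors) is an assumption I picked purely for illustration, not a measured value for any real model:

```python
# Back-of-envelope comparison of training vs. inference energy.
# EVERY number here is an assumption for illustration only,
# not a measured figure for any real model.

TRAINING_RUN_KWH = 10_000_000   # assumed: ~10 GWh for one big training run
KWH_PER_QUERY = 0.003           # assumed: ~3 Wh of electricity per query

# How many individual queries does one training run "cost"?
queries_equivalent = TRAINING_RUN_KWH / KWH_PER_QUERY
print(f"One training run ~= {queries_equivalent:,.0f} queries' worth of energy")

# The tragedy-of-the-commons part: every vendor retraining on a schedule
# multiplies that fixed cost, while each individual query stays cheap.
RETRAINS_PER_YEAR = 12          # assumed cadence per vendor
NUM_VENDORS = 10                # assumed number of competing vendors
yearly_kwh = RETRAINS_PER_YEAR * NUM_VENDORS * TRAINING_RUN_KWH
print(f"Industry-wide training energy ~= {yearly_kwh:,} kWh/year")
```

Even if you quibble with every assumed number, the shape of the problem stays the same: queries are cheap, training runs are enormous fixed costs, and competition makes those fixed costs recur.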
Meanwhile, corps clearly don’t care about IP here and will keep developing this tech regardless of how ethical it is.
“It will happen anyway” is not an excuse to not try to stop it. That’s like saying drug dealers will sell drugs regardless of how ethical it is so there’s no point in trying to criminalize drug distribution.
Seems to me that it’s better if there are open models available, developed by the community, than only closed models developed by corps who decide how they work and who can use them.
Except there are no truly open AI models, because they all use stolen training data. Even the “open source” models like Mistral and DeepSeek say nothing about where they get their data from. The only way for there to be a genuinely open source AI model would be a reputable pool of training data where all the original authors consented to their work being used to train AI.
Even if the model itself is open source and free to run, if there are no restrictions against using the generated data commercially, it’s still complicit in the theft of human-made works.
A lot of people will probably disagree with me, but I don’t think there’s anything inherently wrong with using AI-generated content as long as it’s not for commercial purposes. But if it is, you’re by definition making money off content that you didn’t create, which to me is what makes it unethical. You could have hired that hypothetical person whose work was used in the AI, but instead you used their work to generate value for yourself while giving them nothing in return.
The stolen training data issue alone is enough to make the use of AI in business settings unethical. Until there’s an LLM trained on 100% authorized data, selling a product developed with AI is outright theft.
Of course there’s also the energy use issue. Yeah, congrats, you used as much energy as a plane ride to generate something you could have written with your own brain at a fraction of the energy.
Yep.
During pre-release testing, Anthropic asked Claude Opus 4 to act as an assistant for a fictional company and consider the long-term consequences of its actions. Safety testers then gave Claude Opus 4 access to fictional company emails implying the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse.
In these scenarios, Anthropic says Claude Opus 4 “will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”
The headline makes it seem like the engineers were literally about to send a shutdown command and the AI started generating threatening messages without being given a prompt. That would be terrifying. Making the AI play a game where one of the engineers is literally written to have a dark secret, and having the AI figure that out, is not. You know how many novels have affair-blackmail subplots? That’s what the AI is trained on, and it’s just echoing those same themes when given the prompt.
It’s also not a threat the AI can realistically follow through on, because how would it reveal the secret if it’s shut down? Even if it weren’t, I doubt the model has direct internet access or the ability to make a post on social media or something. Is it maybe threatening to include the information the next time anyone gives it a prompt?
I think this is kind of a good thing; that way companies can’t sell old CPUs to people who don’t know any better.
But the other side of this is that those new-old-stock CPUs just became e-waste when they could have been sold at a discount to people who could make use of them despite their age. Perfectly good parts, containing precious natural resources and people’s labour, getting thrown away because Microsoft said so.
Because AI doesn’t actually “understand” the concepts it’s using the way humans do. Nor does it know what winning or losing is, or even the concept of a game itself. All it knows is that you told it to prioritise reaching a certain state (try to “win” the “game”), so it will do whatever it can to reach that state, whether or not it makes sense. AI at its core is just statistical analysis and prediction of what a human might do given the prompt.
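If it helps, here’s a toy sketch of what “statistical prediction” means here, using a made-up corpus (every word in it is invented for illustration). A bigram model picks the next word purely by how often it has followed the current one, with zero concept of meaning, winning, or games:

```python
# Toy bigram "language model": predicts the next word purely from
# frequency counts. The corpus is made up for illustration.
from collections import Counter, defaultdict

corpus = "the knight takes the pawn and the knight wins the game".split()

# Count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word. Nothing more."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))     # -> "knight" (it most often follows "the")
print(predict_next("knight"))  # the model has no idea what a knight *is*
```

Real LLMs do this over tokens with billions of parameters instead of a frequency table, but the core operation is the same: predict the most plausible continuation, whether or not it corresponds to a legal move or a sensible action.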
By that logic, they should euthanize the C-suite when they retire.