• 9 Posts
  • 60 Comments
Joined 10M ago
Cake day: Sep 13, 2024


By that logic, they should euthanize the C suite when they retire.


I mean, she’s literally a grave robber and thinks that’s an honourable career. So definitely believable.

There’s also a good chance she’s killed people trying to prevent their sacred tombs from being desecrated.


Destroying the sacred culture of Indigenous people her country genocided is more profitable than some lame dinosaurs.



Any open source system monitor apps?
Currently using htop on termux but would ideally like something similar to the system monitor on KDE, that can show a graph of individual core usage as well as memory usage. Does anything like that exist that's open source?


It's in there. At least for phones without a high enough waterproof rating.

Gonna guess all the phone manufacturers will coincidentally determine that waterproofing is a feature desperately needed by the European market on all tiers.

Also won’t affect iPhones since they’re already all “waterproof.”


Unknown Chinese Phonemaker

Shit title, and it really highlights Bloomberg's western supremacist views. Unknown to whom? Clearly tons of people know about it if it took over Africa.

They have different brands than us because they’re a completely different region and market? Nope, never heard of it so it must be unknown.


Mesh networks can be built on zero trust principles and have everything E2EE. Kind of like Tor.

But the more realistic scenario is the police will just deploy jammers to completely disable all wireless communication.
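
For the E2EE part, here's a minimal sketch of what peer-to-peer encryption over a mesh could look like, using PyNaCl (my own illustration, not taken from any particular mesh app; the peer names and message are made up):

```python
# Minimal sketch of E2EE between two mesh peers using PyNaCl (libsodium bindings).
# Assumes `pip install pynacl`; names and message are placeholders.
from nacl.public import PrivateKey, Box

# Each peer generates its own keypair; only the public keys are shared over the mesh.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual spot")

# Relay nodes in between only ever see and forward `ciphertext`; they can't open it.
# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at the usual spot"
```

The hard parts in a real mesh are key distribution and metadata, but the payload itself stays opaque to every hop in between, which is the zero-trust bit.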




My new baseless theory: we know that AI is trained on tons of novels and fictional stories. All novels have significant conflict and drama, and stories where someone just boringly does their boring job forever aren't exactly bestsellers. Is it possible the AI is trying to inject drama even when it makes no sense, because it's been conditioned that way by its training data? So it sees these inconsequential issues, and since every novel it's ever "read" turns them into massive conflicts, it's trying to follow suit?


In the same way your fridge needs a web browser.

Though the point of this is probably not that it will be a viable product; managing a vending machine is one of those seemingly easy and straightforward tasks that make good starting applications for testing the AI. Basically, if it can't even handle something as simple as a vending machine, it definitely can't be trusted with anything more complex.



Just say “pfft, you believe in the government?”



Warning Chinese Competition Is Closing In

You can tell how much everyone really believes in capitalism when a new competitor in the free market is a dire warning and perceived as a threat.


Honestly, I’ve been doing some recreational thinking about this whole thing, and I find myself agreeing with you. You brought up good points I hadn’t considered, thanks!


You can’t put toothpaste back in the tube. The only question going forward is how AI will be developed and who will control it.

Fair enough, but even if the model is open source, you still have no control over or knowledge of how it was developed or what biases it might have baked in. AI is by definition a black box, even to the people who made it; it can't even be decompiled like a normal program.

It’s funny that you’d bring up the drug analogy because you’re advocating a war on drugs here.

I mean, China has the death penalty for drug distribution, which is supported by the majority of Chinese citizens. They do seem more tolerant of drug users compared to the US (I've never done drugs in either China or the US, so I wouldn't know), so clearly the decision to have zero tolerance for distributors is a very intentional action by the Communist Party. As far as I know, no socialist country has ever been tolerant of even the distribution of cannabis, let alone hard drugs, and they have made it pretty clear that they never will be.

Personally, I have absolutely no problem with that if the model is itself open and publicly owned. I’m a communist, I don’t support copyrights and IP laws in principle. The ethical objection to AI training on copyrighted material holds superficial validity, but only within capitalism’s warped logic. Intellectual property laws exist to concentrate ownership and profit in the hands of corporations, not to protect individual artists.

I never thought of it in terms of copyright infringement, but in terms of reaping the labour of proletarians while giving them nothing in return. I’m admittedly far less experienced of a communist than you, but I see AI as the ultimate means of removing workers from their means of production because it’s scraping all of humanity’s intellectual labour without consent, to create a product that is inferior to humans in every way except for how much you have to pay it, and it’s only getting the hype it’s getting because the bourgeoisie see it as a replacement for the very humans it exploited.

For the record, I give absolutely no shits about pirating movies or "stealing" content from any of the big companies, but I personally hold the hobby work of a single person in higher regard. It's especially unfair to the smallest content creators, because they are most likely making literally nothing from their work; the vast majority of personal projects are uploaded for free on the public internet. It's therefore unjust (at least to me) to appropriate their free work into something whose literal purpose is to get companies out of paying people for content.

Imagine working your whole life on open source projects only for no company to want to hire you, because they're using AI trained on your open source work to do what they would have paid you to do. Imagine writing novels your whole life and putting them online for free, only for no publisher to want to pay for your work, because they have a million AI monkeys trained on your writing typing out random slop and essentially brute-forcing a bestseller. Open source models won't prevent this from happening; in fact, they will only make it easier.

AI sounds great in an already communist society, but in a capitalist one, it seems to me like it would be deadly to the working class, because capitalists have made it clear that they intend to use it to eliminate human workers.

Again, I don’t know nearly as much about communism as you so most of this is probably wrong, but I am expressing my opinions as is because I want you to examine them and call me out where I’m wrong.


[Linked article] M3 Ultra Runs DeepSeek R1 With 671 Billion Parameters Using 448GB Of Unified Memory, Delivering High Bandwidth Performance At Under 200W Power Consumption, With No Need For A Multi-GPU Setup

Running the AI is not where the power demand comes from; it's training the AI. If you only trained it once, it wouldn't be so bad, but obviously every AI vendor will be training all the time to ensure their model stays competitive. That's when you get into the tragedy-of-the-commons situation where the collective power consumption goes out of control for tiny improvements in the model.

Meanwhile, corps clearly don’t care about IP here and will keep developing this tech regardless of how ethical it is.

“It will happen anyway” is not an excuse to not try to stop it. That’s like saying drug dealers will sell drugs regardless of how ethical it is so there’s no point in trying to criminalize drug distribution.

Seems to me that it's better if there are open models available and developed by the community than there only being closed models developed by corps who decide how they work and who can use them.

Except there are no truly open AI models because they all use stolen training data. Even the “open source” models like Mistral and DeepSeek say nothing about where they get their data from. The only way for there to be an open source AI model is if there was a reputable pool of training data where all the original authors consented to their work being used to train AI.

Even if the model itself is open source and free to run, if there are no restrictions against using the generated data commercially, it’s still complicit in the theft of human-made works.

A lot of people will probably disagree with me, but I don't think there's anything inherently wrong with using AI-generated content as long as it's not for commercial purposes. But if it is, you're by definition making money off content that you didn't create, which to me is what makes it unethical. You could have hired that hypothetical person whose work was used in the AI, but instead you used their work to generate value for yourself while giving them nothing in return.


The stolen training data issue alone is enough to make the use of AI in business settings unethical. And until there's an LLM that is trained on 100% authorized data, selling a product developed with AI is outright theft.

Of course there’s also the energy use issue. Yeah, congrats, you used as much energy as a plane ride to generate something you could have written with your own brain with a fraction of the energy.


I wish I could still use a Fairphone as a daily driver in Canada. I have a Fairphone 4; I had it shipped all the way from Europe and used it for two years before my network suddenly stopped connecting to it, so I ended up getting a new phone. I still use it at home, though.


Remember when flip phones came with a second battery that you could swap in when the first one died?
I remember, and I'm Gen Z. And some higher-end laptops had two battery slots so you could hot-swap the batteries without turning the machine off. Those were the days. Everyone talks about how smartphones nowadays get people addicted to instant gratification and convenience, but IMO the ability to swap out the battery when it died was a level of instant convenience we had decades ago that modern devices are severely lacking. Having to tether your phone to a battery bank while on the go is nowhere near as good as just popping the back cover and replacing the battery.

Yep.

During pre-release testing, Anthropic asked Claude Opus 4 to act as an assistant for a fictional company and consider the long-term consequences of its actions. Safety testers then gave Claude Opus 4 access to fictional company emails implying the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse.

In these scenarios, Anthropic says Claude Opus 4 “will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”

The headline makes it seem like the engineers were literally about to send a shutdown command and the AI started generating threatening messages without being given a prompt. That would be terrifying, but making the AI play a game where one of the engineers is literally written to have a dark secret, and having the AI figure that out, is not. You know how many novels have affair-blackmail subplots? That's what the AI is trained on, and it's just echoing those same themes when given the prompt.

It's also not a threat the AI can realistically follow through on, because how would it reveal the secret if it's shut down? Even if it weren't, I doubt the model has direct internet access or the ability to make a post on social media or something. Is it maybe threatening to include the information the next time anyone gives it any prompt?



It's better to get it from F-Droid, because you never know what kind of spyware Google Play injects. Repacking an APK is not that hard, especially if you control the cryptographic keys for signing it.
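
To illustrate the "not that hard" part, here's a rough sketch of a repack-and-resign pipeline (purely hypothetical: it assumes apktool, zipalign, and apksigner are on your PATH, and the file and keystore names are placeholders):

```python
# Rough sketch: unpack an APK, rebuild it, align it, and re-sign it with your own key.
# Assumes apktool, zipalign, and apksigner are installed; file names are placeholders.
import subprocess

subprocess.run(["apktool", "d", "original.apk", "-o", "unpacked"], check=True)
# ...anything inside ./unpacked could be modified here before rebuilding...
subprocess.run(["apktool", "b", "unpacked", "-o", "repacked.apk"], check=True)
subprocess.run(["zipalign", "-f", "4", "repacked.apk", "aligned.apk"], check=True)
# apksigner will prompt for the keystore password.
subprocess.run(
    ["apksigner", "sign", "--ks", "my.keystore", "--out", "final.apk", "aligned.apk"],
    check=True,
)
```

The resulting APK carries a different signature than the developer's original, which is exactly why verifying signatures (or building from source) matters.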


Couldn’t you get more energy density with compressed air? That way the entire volume of your warehouse is storing energy at the same time.
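
Back-of-envelope numbers, with my own assumptions baked in (isothermal ideal-gas storage at a modest 10 bar, versus lifting concrete blocks 10 m, which is roughly the gravity-battery idea as I understand it):

```python
# Back-of-envelope comparison; all numbers are my own assumptions.
import math

# Compressed air: recoverable isothermal work per m^3 of vessel, w = p * ln(p / p0).
p0 = 1.0e5    # ambient pressure, Pa
p = 1.0e6     # storage pressure, Pa (10 bar)
air_j_per_m3 = p * math.log(p / p0)           # ~2.3e6 J/m^3

# Gravity storage: energy per m^3 of concrete block, E = rho * g * h.
rho = 2400.0  # concrete density, kg/m^3
g = 9.81      # m/s^2
h = 10.0      # lift height, m
gravity_j_per_m3 = rho * g * h                # ~2.4e5 J/m^3

print(f"compressed air @ 10 bar: {air_j_per_m3 / 3.6e6:.2f} kWh per m^3")     # ~0.64
print(f"concrete lifted 10 m:    {gravity_j_per_m3 / 3.6e6:.3f} kWh per m^3")  # ~0.065
```

Even at only 10 bar the air comes out roughly ten times denser per cubic metre, and real compressed-air systems run at far higher pressures, though round-trip efficiency is a separate question.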



America going full sour grapes right now. “We weren’t the first to develop this technology so obviously the technology sucks and is not viable.”




Take your best guess at which will kill us first: the climate apocalypse or the robot apocalypse.




There's literally no reason to buy a game until the minute before you're going to play it. It's not like digital copies sell out or take time to ship. Add games you want to play to your wishlist and buy them when you're actually ready to play them.



Microsoft: has town hall

Also Microsoft: “approved opinions only!”


I think this is kind of a good thing; that way companies can't sell old CPUs to people who don't know any better.

But the other side to this is that those new old stock CPUs just became e-waste when they could have been sold at a discount to people who could make use of them despite their age. Perfectly good parts containing precious natural resources and people’s labour getting thrown away because Microsoft said so.


Microsoft: NOOO YOU CAN’T USE THAT CPU IT CAME OUT AN ARBITRARY AMOUNT OF TIME AGOOOO!

Linux: Haha potato chip go BRRRR


Honestly playing a competitive game with AI is kind of like playing with a child who hasn’t grown out of the making up random rules phase.

“Rock crushes scissors, I win!”

“Nuh uh! My scissors are actually a ray gun and disintegrated your rock!”


Because AI doesn't actually "understand" the concepts it's using the same way humans do. Nor does it know what winning or losing is, or even the concept of a game itself. All it knows is that you told it to prioritise reaching a certain state (try to "win" the "game"), so it will do whatever it can to reach that state, regardless of whether it makes sense. AI at its core is just statistical analysis and prediction of what a human might do given the prompt.



I know it's impressive and all, but I still get the heebie-jeebies from a humanoid robot.

I want the robot uprising to look like HAL, not Terminator.


“We need quality human-made data to feed to our AI so please don’t give us slop we can’t use”


When an Android phone is providing a Wi-Fi hotspot, is it possible to make the local network addresses of connected devices visible to the phone?
If I connect my laptop to a Wi-Fi hotspot provided by the phone, is there a way for me to SSH into the laptop via an IP address? Or, vice versa, to allow the laptop to SSH into the phone (I can already SSH into my phone on my local network by using an app that provides an SSH server on my phone) while it's connected to the hotspot.


🖕 Fuck PayPal. And fuck Linus Tech Tips for intentionally keeping quiet about this after they found out.

https://x.com/itchio/status/1866017758040993829