For PC gaming news and discussion.
PCGamingWiki
Rules:
- Be Respectful.
- No Spam or Porn.
- No Advertising.
- No Memes.
- No Tech Support.
- No questions about buying/building computers.
- No game suggestions, friend requests, surveys, or begging.
- No Let’s Plays, streams, highlight reels/montages, random videos or shorts.
- No off-topic posts/comments, within reason.
- Use the original source, no clickbait titles, no duplicates.
(Submissions should be from the original source if possible, unless it is paywalled or a non-English source.
If the title is clickbait or lacks context, you may lightly edit the title.)
Predictable outcome, common tech company L.
Unless you’re doing music or graphic design, there’s no use case. And if you do, you probably have a high-end GPU anyway.
I could see a use for local text gen, but that apparently eats quite a bit more than what a desktop PC can offer if you want actually good results and speed. Generally though, I’d rather have separate expansion cards for this. Making it part of other processors is just going to increase their price, even for those who have no use for it.
There are local models for text gen - not as good as ChatGPT, but at the same time they’re uncensored - so it may or may not be useful.
Yes, I know - that’s my point. But you need the necessary hardware to run those models at a usable speed. Waiting a minute for some vaguely relevant gibberish is not going to be of much use. You could also use generative text for other applications, such as video game NPCs; all those otherwise useless drones you see in a lot of open-world titles could gain a lot of depth.
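To give a rough idea, here’s a minimal sketch of local text gen using the Hugging Face transformers library (the model name is just a tiny example I picked; anything producing decent output would be far larger, which is exactly where the hardware requirements come in):

```python
# Minimal local text-generation sketch (illustrative only).
# Assumes: pip install transformers torch
# "distilgpt2" is just a small example model; useful NPC dialogue would
# need something much bigger, which is where VRAM/NPU requirements bite.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "The guard at the city gate looked up and said:"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```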
The dedicated TPM chip is already being used for side-channel attacks. A new processor running arbitrary code would be a black hat’s wet dream.
Do you have an article on that handy? I like reading about side channel and timing attacks.
TPM-FAIL from 2019. It affects Intel fTPM and some dedicated TPM chips: link
The latest (at the moment) UEFI vulnerability, UEFIcanhazbufferoverflow is also related to, but not directly caused by, TPM on Intel systems: link
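To give a feel for why timing matters at all, here’s a toy sketch of the general class of attack (not the actual TPM-FAIL technique, which recovers ECDSA nonces from variations in signing time): any check that bails out early runs slightly longer the more of the secret you’ve guessed correctly, so an attacker who can time it can recover the secret piece by piece.

```python
# Toy timing side channel (illustrative only, NOT the TPM-FAIL attack).
# An early-exit comparison runs longer the more leading bytes match,
# which leaks the secret one byte at a time to anyone who can time it.
import time

SECRET = b"hunter2"

def insecure_compare(guess: bytes) -> bool:
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False  # early exit: running time depends on the secret
    return True

def time_guess(guess: bytes, rounds: int = 200_000) -> float:
    start = time.perf_counter()
    for _ in range(rounds):
        insecure_compare(guess)
    return time.perf_counter() - start

# The guess whose first byte is correct takes measurably longer on average.
for first_byte in (b"a", b"h", b"z"):
    guess = first_byte + b"\x00" * (len(SECRET) - 1)
    print(first_byte, f"{time_guess(guess):.3f}s")
```

Constant-time comparisons (e.g. hmac.compare_digest) exist precisely to close this kind of leak.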
That’s insane. How can they be doing security hardware and leave a timing attack in there?
Thank you for those links, really interesting stuff.
It’s not a full CPU. It’s more limited than a GPU.
That’s why I wrote “processor” and not CPU.
A processor that isn’t Turing complete isn’t a security problem like the TPM you referenced. A TPM includes a CPU. If a processor is Turing complete it’s called a CPU.
Is it Turing complete? I don’t know. I haven’t seen block diagrams that show the computational units have their own CPU.
CPUs also have coprocessors to speed up floating-point operations. That doesn’t necessarily make them a security problem.
It will be.
IoT devices are already getting owned at staggering rates. Adding a learning model that currently cannot be secured is absolutely going to happen, and it’s going to cause a whole new wave of breaches.
The “s” in IoT stands for “security”
The other 16% either don’t know what AI is or are trying to sell it. A combination of both is possible. And likely.
Tbh this is probably for things like DLSS, captions, etc. Not necessarily for chatbots or generative art.
I’d pay extra for no AI in any of my shit.
I would already like to buy a 4K TV that isn’t smart and have yet to find one. Please don’t add AI into the mix as well :(
I was just thinking the other day how I’d love to “root” my TV like I used to root my phones. Maybe install some free OS instead
You can if you have a pre-2022 LG TV. It’s more akin to jailbreaking since you can’t install a custom OS, but it does give you more control.
https://rootmy.tv
All TVs are dumb TVs if they have no internet access
I don’t have a TV, but doesn’t a smart TV require internet access? Why not just… not give it internet access? Or do they come with their own mobile data plans now meaning you can’t even turn off the internet access?
They continually try to get on the internet; it’s basically malware at this point. The on-board SoC is also usually comically underpowered, so the menus stutter.
I never needed a TV, but now I for sure am not getting one.
IDK why people are downvoting you, I am sure you’re not alone with that sentiment.
A lot of TVs now require an account login before you can even use them.
OK, that’s really fucked. What the hell? Wait a moment… that means they could turn the use of the TV into a subscription at any time! That’s crazy…
I just disconnected my smart TV from the internet. Nice and dumb.
Still slow UI.
If only signage displays had the fidelity of a regular consumer OLED without the business-usage tax on top.
What do you use the UI for? I just turn my TV on and off. No user interface needed. Only a power button on the remote.
Even switching to other stuff right after the boot (because the power-on can’t be called a simple power-on anymore), the TV is slow.
I recently had the pleasure of interacting with a TV from ~2017 or 2018. God was it slow. Especially loading native apps (Samsung 50"-ish TV)
I like my Chromecast. At least that was properly specced. Now if only HDMI and CEC would work the way I’d like :|
Look into commercial displays
The simple trick to turn a “smart” TV into a regular one is to cut off its internet access.
Except it will still run like shit, and it may send telemetry via other means to your neighbor’s same-brand TV.
I’ve never heard of that. Do you have a source on that? And how would it run like shit if you’re using something like a Chromecast?
I don’t know about the telemetry, but my smart tv runs like shit after being on for a few hours. Only a full power cycle makes it work properly again.
https://www.amazon.com/Amazon-Sidewalk/b?ie=UTF8&node=21328123011
https://aws.amazon.com/iot-core/features/
Mine still takes several seconds to boot Android TV just so it can display the HDMI input, even when it’s not connected to the internet. It has to stay plugged into power, because if there’s a power cut it needs to boot Android TV all over again.
My old dumb TV did that in a second without booting an entire OS. Next time I need a big screen, it will be a computer monitor.
I got a Roku TV and I don’t even know what that means, cuz my tele will never see the outside world.
Still uses the shitty ‘smart’ operating system to handle inputs and settings.
I just bought a commercial display directly from the Bengal stadium. Still has Wi-Fi.
Signage TVs are good for this. They’re designed to run 24/7 in store windows displaying advertisements or animated menus, so they’re a bit pricey, and don’t expect any fancy features like HDR, but they’ve got no smarts whatsoever. What they do have is a slot you can shove your own smart gadget into, with a connector that breaks out power, HDMI, etc. Someone has made a Raspberry Pi Compute Module carrier board for it, so if you’re into, say, Jellyfin, you can make it smart completely under your own control with e.g. LibreELEC. Here’s a video from Jeff Geerling going into more detail: https://youtu.be/-epPf7D8oMk
Alternatively, if you want HDR and high refresh rates, you’re okay with a smallish TV, and you’re really willing to splash out, ASUS ROG makes 48" 4K 10-bit gaming monitors for around $1700 US. HDMI is HDMI, you can plug whatever you want into there.
We got a Sceptre brand TV from Walmart a few years ago that does the trick. 4k, 50 inch, no smart features.
I’m sure that’s coming up.
As a yearly fee for DRM’d televisions that require internet access to work at all, maybe.
Right now it’s easier to find projectors without AI and without a smart OS. Before long, though, it’s going to be harder to find ones without a smart OS and AI upscaling.
Show the actual use case in a convincing way and people will line up around the block. Generating some funny pictures or making generic suggestions about your calendar won’t cut it.
I completely agree. There are some killer AI apps, but why should AI run on my OS? Recall is a complete disaster of a product and I hope it doesn’t see the light of day, but I’ve no doubt that there’s a place for AI on the PC.
Whatever application there is in AI at the OS level, it needs to be a trustless system that the user has complete control of. I’d be all for an Open source AI running at that level, but Microsoft is not going to do that because they want to ensure that they control your OS data.
Machine learning in the os is a great value add for medium to large companies as it will allow them to track real productivity of office workers and easily replace them. Say goodbye to middle management.
I think it could definitely automate some roles where you aren’t necessarily thinking and all decisions are made based on information internally available to the PC. For sure these exist but some decisions need human input, I’m not sure how they automate out those roles just because they see stuff happening on the PC every day.
If anything I think this feature is used to spy on users at work and see when keystrokes fall below a certain level each day, but I’m sure that’s already possible for companies to do (but they just don’t).
I would pay extra to be able to run open LLMs locally on Linux. I wouldn’t pay for Microsoft’s Copilot stuff that’s shoehorned into every interface imaginable while also causing privacy and security issues. The context matters.
That’s why NPUs are actually a good thing. The ability to run LLMs locally instead of sending everything to Microsoft/OpenAI for data mining will be great.
I hate to be that guy, but do you REALLY think that on-device AI is going to prevent all your shit being sent to anyone who wants it, in the form of “diagnostic data” or “usage telemetry” or whatever weasel-worded bullshit is in the terms of service?
They’ll just send the results for “quality assurance” instead of doing the math themselves and save a bundle on server hosting.
Yes, obviously, especially if you are running all open source software.
All your unattended data will be taken (and some of the attended data too). This doesn’t mean you should stop attending to your data. Even if you’re somehow forced to use Windows instead of an open alternative, it doesn’t mean you can’t dual boot or use other privacy-conscious devices when dealing with your sensitive data.
Closed/proprietary OSes and hardware drivers can’t be considered safe by design.
I replied to the person above “locally on Linux”.
Even in Windows, local queries give the possibility of control. Set your firewall and it cannot leak.
And the other 16% would also pay you $230 to hit them in the face with a shovel
Poll shows 84% of PC users are suckers.
You like having to pay more for AI?
I feel like the sarcasm was pretty obvious in that comment, but maybe I’m missing something.
I would pay for AI-enhanced hardware…but I haven’t yet seen anything that AI is enhancing, just an emerging product being tacked on to everything they can for an added premium.
Already had that Google thingy for years now. The USB/NVMe device for image recognition. Can’t remember what it’s called now. Cost like $30.
Edit: Google Coral TPU
I use it heavily at work nowadays. It would be nice to run it locally.
I’m curious what you use it for at work.
Not the guy you were asking, but it’s great for writing PowerShell scripts.
I’m a programmer so when learning a new framework or library I use it as an interactive docs that allows follow up questions.
I also use it to generate things like regex and SQL queries.
It’s also really good at refactoring code and other repetitive tasks like that
It does seem like a good translator for the less human-readable stuff like regex and such. I’ve dabbled with it a bit, but I’m a technical artist and haven’t found much use for it in the things I do.
You don’t need AI enhanced hardware for that, just normal ass hardware and you run AI software on it.
But you can run more complex networks faster. Which is what I want.
Maybe I’m just not understanding what AI-enabled hardware is even supposed to mean
It’s hardware specifically designed for running AI tasks. Like neural networks.
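For a rough picture of what that means: neural-network inference is mostly big batches of multiply-accumulates (matrix multiplies plus a simple nonlinearity), and an NPU is essentially fixed-function silicon for doing those at high throughput and low power. A toy sketch, with layer sizes made up purely for illustration:

```python
# Toy two-layer neural network forward pass (illustrative only).
# The matrix multiplies below are the kind of work NPUs accelerate;
# the shapes here are made up for the example.
import numpy as np

rng = np.random.default_rng(0)

x  = rng.standard_normal((1, 128))    # one input vector
W1 = rng.standard_normal((128, 256))  # layer 1 weights
W2 = rng.standard_normal((256, 10))   # layer 2 weights

hidden = np.maximum(0, x @ W1)        # matmul + ReLU
logits = hidden @ W2                  # matmul
print(logits.shape)                   # (1, 10)
```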
https://github.com/huggingface/candle
You can look into this, however it’s not what this discussion is about
Exactly what we are talking about.
Stick to the discussion of paying a premium for hardware not the software
Not sure what you mean? The hardware runs the software tasks more efficiently.
The discussion is whether people should/would pay extra for hardware designed around ai vs just getting better hardware
Anything AI actually enhanced would be advertising the enhancement not the AI part.
DLSS and XeSS (XMX) are AI, and they’re noticeably better than non-hardware-accelerated alternatives.
In the 2010s, it was cramming a phone app and Wi-Fi into things to try to justify the higher price, while also spying on users in new ways. The device might even have a screen for basically no reason.
In the 2020s, it’s those same useless features, now with a bit of software with a flashy name that removes even more control from the user and lets the manufacturer spy on the user even further.
My Samsung A71 has had devil AI since day one. You know that feature where you can mostly use fingerprint unlock, but then once a day or so it asks for the actual passcode for added security? My A71’s AI has a 100% success rate at picking the most inconvenient time to ask for the passcode instead of letting me do my thing.
It’s like rgb all over again.
At least rgb didn’t make a giant stock market bubble…
The biggest surprise here is that as many as 16% are willing to pay more…
Acktually it’s 7% that would pay, with the remainder ‘unsure’
I mean, if framegen and supersampling solutions become so good on those chips that regular versions can’t compare I guess I would get the AI version. I wouldn’t pay extra compared to current pricing though
And when traditional AI programs can be run on much lower end hardware with the same speed and quality, those chips will have no use. (Spoiler alert, it’s happening right now.)
Corporations, for some reason, can’t fathom why people wouldn’t want to pay hundreds of dollars more just for a chip that can run AI models they won’t need most of the time.
If I want to use an AI model, I will, but if you keep developing shitty features that nobody wants using it, just because “AI = new & innovative,” then I have no incentive to use it. LLMs are useful to me sometimes, but an LLM that tries to summarize the activity on my computer isn’t that useful to me, so I’m not going to pay extra for a chip that I won’t even use for that purpose.
You borked your link
Whoops, no clue how that happened, fixed!
Here is the article.
That still needs an FPGA. While they certainly seem to be able to use smaller ones, adding an FPGA chip will still add cost.
I’m willing to pay extra for software that isn’t
Okay, but hear me out. What if the OS got way worse, and then I told you that paying me for the AI feature would restore it to a near-baseline level of original performance? What then, eh?
One word. Linux.
I already moved to Linux. Windows is basically doing this already.