• 1 Post
  • 119 Comments
Joined 3Y ago
Cake day: Jan 17, 2022


The propaganda aspect is important, so I’m adding this to a reply rather than yet another edit.

This research is interesting. What the article tries to do isn’t clarify the work but rather put a nation “first”. Other nations do that too. That’s not a good thing. We should celebrate research as a better understanding of our world, both natural and engineered. We should share what has been learned and build on top of each other’s work.

Now when a nation, be it China, the US, or any other country, says it is “first” and “ahead” of everybody else, it’s to bolster nationalistic pride. It’s not to educate citizens on the topic. It’s important to be able to disentangle the two regardless of the source.

That’s WHY I’m being so finicky about facts here. It’s not that I particularly care about the topic; rather, it’s about the overall political process, not the science.


Thanks for taking the time to clarify all that.

It’s not a typo because the paper itself does mention 3090 as a benchmark.

I do tinker with FPGAs at home, for the fun of it (I’m no expert, but the fact that I own a few already shows that I know more about the topic than most people who don’t even know what it is, or what it’s for), so I’m quite aware of what some of the benefits (and trade-offs) can be. It’s an interesting research path (again, otherwise I wouldn’t have invested my own resources to learn more about that architecture in the first place), so I’m not criticizing that either.

What I’m calling BS on… is the title and the “popularization” (and propaganda, let’s be honest here) article. Qualifying a 5-year-old chip as flagship (when, again, it never was) and implying what the title does, is wrong. It overblows otherwise interesting work. That being said, I’m not surprised; OP shares this kind of thing regularly, to the point that I ended up blocking him.

Edit: not sure if I really have to say so but the 4090, in March 2025, is NOT the NVIDIA flagship, that’s 1 generation behind. I’m not arguing for the quality of NVIDIA or AMD or whatever chip here. I’m again only trying to highlight the sensationalization of the article to make the title look more impressive.

Edit2: the 5090, in March 2025 again, is NOT even the flagship in this context anyway. That’s only for gamers… but here the article, again, is talking about “energy-efficient AI systems” and for that, NVIDIA has an entire array of products, from Jetson to GB200. So… sure the 3090 isn’t a “bad” card for a benchmark but in that context, it is no flagship.

PS: taking the occasion to highlight that I do wish OP to actually go to China, work and live there. If that’s their true belief and they can do so, they should not solely “admire” a political system from the outside, without participating in it, but rather give up their citizenship and actually move to China.


Well, I honestly tried (cf history). You’re neither addressing my remark about the fact from the article nor the bigger picture. Waste of time, blocked.


Unfortunately my model isn’t supported. I might look for a 2nd hand supported one with the USB adapter and try, as I do use and work with Linux on a daily basis.


Based on https://old.reddit.com/r/mildlyinfuriating/comments/1jb2uvt/roomba_accidentally_saw_outside_and_now_i_cant/ I’d bet some models surely do.

That being said, I am NOT promoting Roomba or any other brand, I’m only highlighting that apps aren’t necessarily a requirement for the basic feature.

Finally, as others suggested, if one genuinely does need such a feature and is mindful about privacy, I’d check https://valetudo.cloud/ first, then see what hardware supports it. Sadly it doesn’t seem to support Roomba AFAICT, but it does support Roborock, lucky you: check https://valetudo.cloud/pages/general/supported-robots.html#roborock

Edit: apparently “Xiaomi V1 is made by Roborock” according to https://valetudo.cloud/pages/general/supported-robots.html so maybe there is a way; worth investigating for you IMHO.


turns out you can use older GPUs in creative ways to get a lot more out of them than people realized

If that’s the point, then that’s the entire GPU-for-mining, then ML, revolution, thanks mostly to CUDA, which already happened around 2010. So that’s even older: that’d be 15 years ago.

What I was highlighting anyway is that it’s hard to trust an article where simple facts are wrong.


What is this… “Nvidia’s flagship RTX 3090 GPU”? Are we back in 2020? Half a decade ago? Is this a joke? Even then, it wasn’t the flagship; the 3090 Ti was.


Is there a Murena/Volla of vacuuming robots? Namely can one buy a working robot (new or reconditioned) with Valetudo pre-installed?


FWIW you can use a Roomba without an app. You… push the physical button on the robot, and voila. No app, no connection, still cleaning.

Sure, you can’t schedule cleaning, but honestly, unless you have a version that can empty its own dust bin and your house is always robot-cleaning friendly (so… no cables in the way, chairs aside, etc.), it’s rarely a huge efficiency gain.

Honestly, I feel like 10 years ago there was a lot of hype around vacuuming robots, but they didn’t “explode” in popularity because they don’t really make such a big difference.


Blocked, you’re just trying to be provocative instead of having a constructive discussion. I don’t have time to waste on that so don’t bother replying to my comments, I won’t see what you write anymore.


In case any parent is reading this and feels (somehow!) like “Oh no… my child will be left behind!” to the point of considering buying some BS humanoid or animaloid “pedagogical” robot, get yourself a (European designed) good “old” Lego set! They’ve been at it for decades (literally, since at least 1998 with Mindstorms) and they focus specifically on pedagogy at school with e.g. https://www.lego.com/en-us/product/lego-education-spike-essential-set-45345

Do NOT get a cheap piece of plastic that you do not understand, that behaves in “smart” ways you can’t explain, and that passes along data you have no idea about!


“Timmy hugged his little robot friend before heading to bed. He doesn’t have a name for it – yet. “It’s like a little teacher or a little friend,” the boy said”

… it’s way WORSE than a fail. How do you think this human will develop, assuming friendship with a (commercial) product rather than another human being? My bet, but I’m no psychologist, is poorly.


That opening photo is so telling, a chess robot… while one could literally run https://lichess.org/ from ANY device (tablet, mobile phone, laptop, etc.) and have a functionally equivalent experience for free (both open source and free of cost, no ads either), in fact arguably a much better one thanks to zero setup time (literally none, it’s all Web based!) plus the whole community, tutoring exercises, etc.

This is such a blatant fail.


IMHO that’s the linchpin: what’s the gap between what a leader (political or business) claims to be true versus… what’s actually working, and beyond that, what’s actually useful and then used in practice.

Working in innovation, we called this the “marketing gap”, and it’s quite a funnel, from broad claims that AI or any other emerging technology will “change everything” to what people, workers and consumers alike, actually use frequently and are willing to pay for.

One needs bold claims, even if false, to get votes or funding money.


added AI to its products basically to receive government subsidies.

Damn, I opened this post bracing myself for BS comments praising AI slop but this was actually interesting, thanks for sharing! Do you have any references (in English ideally) where I could read about such trends there, not propaganda & tech marketing like that BBC piece?



Again… I didn’t even read the article, but “[redacted to remove bias] University researchers have developed [better] than leading [whatever].” is definitely interesting yet also pointless. Of course research is important, even fundamental, to the production process… but it’s not a fair comparison, because production at scale that is economically reliable comes with a LOT more constraints!

So the research, regardless of the source, is welcome, but comparing it to production, rather than to other research labs pushing limits on the same dimensions, is not useful.

PS: for my starting “Again” see my post history.

Edit: AFAICT “outperforms the most advanced commercial chips from […] Belgium’s Interuniversity Microelectronics Centre.” IMEC doesn’t do commercial chips, just research.



China is now making their own chips domestically that are only a generation or two behind the bleeding edge.

Maybe I’m missing something here, which chips are you talking about? Are you talking about something other than Kirin 9000S and if so which ones please?


I let you read the comments from their source since you didn’t actually bother reading mine.

Edit: people can check my Lemmy history on the topic; I ask the same thing here every few months. Anyway, also the moment to suggest Chip War (even though, as always, outdated) as a good book IMHO on the geopolitics of chip manufacturing.


Feels like we get news like that every quarter but not a lot of actual change. Is any foundry outside of China, e.g. TSMC, buying them, or even entering any partnership to test them? Without subsidies? What’s the yield relative to alternatives?

It does beg for a DeepSeek moment for hardware, namely actual competition stemming from necessity, but so far that race has been a lot of claims.


On a broader and more philosophical perspective, cheating or IMHO more appropriately hacking, is in the eye of the beholder.

Is it really cheating if you respect all the rules? Aren’t the rules actually poorly defined in the first place?

What matters more, I’d argue, is the social contract, namely whether what you are doing is detrimental to yourself and/or others. For example, I lock-picked a door just months ago, and it wasn’t my door, and I’m not even a certified locksmith! Well, it’s because my neighbors asked me to, as their key was jammed from the other side. So… at least according to them, who own the house, it was helpful.

My overall point is that this is quite sensationalist, as most AI “reporting” is (I put quotes around it because truly it’s just marketing or PR for AI corporations at this point), when it actually is expected behavior.

PS: reminds me of a streamer a few months ago (sorry, no link) who was “shocked” that his local AI exited its container to “hack” his computer. Well, lo and behold, when you check his actual prompt, he does explicitly request the AI to do so.


Always has been… there is no reasoning; it’s literally just spitting back the most likely answer based on previously seen answers. A 5-year-old can do better.

Edit: “AI systems may develop deceptive or manipulative strategies without explicit instruction.” … right, well, guess what, the Web (which is most likely the training dataset for most LLMs) is full of “cheating” strategies. Don’t be surprised if you find a “creative” answer to a problem… when it’s literally part of what you trained the model on.


redundant

redundant AND prone to failure! What if the robot slips? What if it’s out of battery? Totally nonsensical!



Impossible-to-read article. All videos get blocked (privacy reasons; even after “accept all”, the 1st video is private…), so I recommend instead checking the link to the research in the description, as they host their own videos.


Right… well it’s about “organizing the World Knowledge” … but if for that one has to do literally anything in order to accumulate more wealth, that takes priority.

I’m genuinely concerned for anybody who would still have a modicum of trust in corporations the size of Google.

Of course they’ll do anything to increase “shareholder value”, legal or not, moral or not. That’s the entire point of a corporation driven by the stock market.


Distinct point, but if I were at Meta, or Microsoft, and wanted to get more resources, I’d point at the challenge (without saying “panic”) of competition, e.g. China, in order to get more GPUs, more data centers built, R&D subsidies, anything that makes the competition look fierce regardless of what I actually believe. So in that sense, it’s a very useful piece for them.


I’d be quite curious to know the number of people who see AI as a standalone product. My bet would be very, very few. Consequently, when Meta provides it as an additional service on top of what they already offer, via chatbots or generated images or suggestions within posts, they shortcut pure players. When they provide that additional service for free, they undercut them. So… I’m not saying Meta won’t see slightly less usage of their own AI services, but for actual products, e.g. WhatsApp, Instagram, etc., I doubt it. IMHO it’s a sensationalist title.


Why would Meta AI be in “panic mode” when they provide the “service” for free anyway?

OpenAI though, or Anthropic, and others who are “pure players” in AI and do charge for a service might be in a pinch… BUT even then, it requires a lot of resources that the random computer user does not have (e.g. a GPU and a large disk), so that even in such a case (sadly, as IMHO self-hosted open-source AI is much saner in most cases, cf my https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence live wiki page) the average consumer would still be better off paying for a model to run.


Indeed, professionals are expensive, and IMHO rightfully so. I’m only trying to highlight the fact that it’s unfair to imply it’s much cheaper when the quality isn’t on par.


Not a weird example. I have my self-hosted video server (PeerTube) and I tinkered with transcription thanks to whisper.cpp locally. It “works” in the sense that most of it is acceptable. It still makes mistakes though. I provide all my content, including hosting, at my own cost and to anyone in the world for free.

So… I definitely see the value. I’m only saying that it has downsides and that, quality-wise relative to professionals, it’s still bad.


Don’t let them know about cp, we’re all doomed! DOOMED! /s

PS: sensationalist BS. Interesting as a philosophical concept for sure but here it’s just fear marketing by AI-bro for their VC-funded scam.


Arguable… it’s OKish at best, definitely nowhere near as good as professionals’. IMHO it’s like spotting a spelling mistake in an official document: you instantly look for MORE mistakes, and it becomes distracting. There is something powerful about trust: once it’s broken, it’s hard to get back. Once a spelling, or here a transcription, mistake happens, we brace for more (rationally so) and it becomes a very taxing endeavor.

So… sure STT progressed quite a bit but it’s STILL not good enough in a lot of cases.

Case in point, IMHO when there is a choice, most people (everybody?) would rather have human-made captions than AI ones.



Some apps are still done this way, e.g. transmission, the BitTorrent client, but also ALL self-hosted Web apps. Sure, it might feel a bit much to install containers on your phone “just” for that, or having to go through a REST API despite being on the same actual device, but still, it provides a TON of apps.

Anyway, yes, I agree that it is often a better model. Still, a lot of apps, e.g. Blender, Inkscape, etc., do provide a CLI interface, so one can use them both with a GUI and without. It’s not decoupled like transmission, but arguably it covers most needs.
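The decoupled model mentioned above can be sketched in a few lines of Python. To be clear, all names here are hypothetical and this is NOT transmission’s actual RPC protocol, just the general shape: a headless “engine” behind a local HTTP API, with any front-end (GUI, CLI, phone app) acting as a thin client.

```python
# Minimal sketch of the decoupled engine/front-end pattern (hypothetical
# names, not transmission's real RPC): backend logic behind a local HTTP API.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class Engine:
    """The backend: holds state and logic, knows nothing about any UI."""

    def __init__(self):
        self.torrents = []

    def add(self, name):
        self.torrents.append(name)
        return {"added": name, "count": len(self.torrents)}


engine = Engine()


class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        request = json.loads(self.rfile.read(length))
        result = engine.add(request["name"])  # dispatch to the engine
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet


# Port 0 lets the OS pick a free port; the server runs in a daemon thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any client works: a GUI, a cron job, a phone app... here, plain urllib.
url = f"http://127.0.0.1:{server.server_port}/"
req = urllib.request.Request(
    url,
    data=json.dumps({"name": "demo.iso"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))
server.shutdown()
```

The point of the split is that front-ends become interchangeable: the engine never changes when you swap the GUI for a CLI or a Web page.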


The “struggle” is because Apple and Google refuse to do so as they built the platform to give themselves priority.

One can trivially do so on a Linux phone, e.g. PinePhone with PostMarketOS.

Source: I did it. Plenty of others do through the usual ways, e.g. pipe in the console but also with things like https://sxmo.org/docs/user/sxmo.7.html#HOOKS


Cool tech but I don’t get it. Who cares for ghostly 3D re-creation of moments? It demands so much more than snapping a 2D photo for a result that can be qualified “strange” at best.

I find XR much more interesting for things that are otherwise impossible, say traveling through the solar system or the human body, playing a rhythm game while punching things in the air, etc.

This is totally overblown. They are not “worlds”; it’s usually 10x10m spaces at most, nor are they photorealistic, so much is left out, there is no animation, no physics, etc.

PS: FWIW I tried some in XR, and I also did some photogrammetry a few years ago. Again, it’s an interesting process, but it demands a lot for a result that few non-tech people would genuinely be impressed with to the point of replacing their holiday photos with it.


Which… is “funny” because even though it is a genuine arms race where 2 powerful nations are competing… it’s a pointless one.

Sure, we do get slightly better STT, TTS, some “generation” of “stuff”, as in human-sounding text used for spam and scam, images and now videos without attribution, but the actual hard stuff? Not a lot of real change there.

Anyway, interesting to see how the chips war unfolds. For now, despite the grand claims from both:

  • the US, with software and models for AI (Claude, OpenAI, etc., driven by VC-backed funding looking for THE next big thing, which does NOT materialize) and hardware, mostly NVIDIA (so happy to sell shovels for the current gold rush), or
  • China, with “cheap”-to-train large models (DeepSeek) and hardware (SMIC, RISC-based chips) to “catch up”, without any large production batch with any comparable yield,

neither has produced anything genuinely positive IMHO.


in any way shape or form

I’d normally accept the challenge if you hadn’t added that. You did though, and it did: a system (arguably intelligent) made an image, several images in fact. Whether we like or dislike its aesthetics, or that the way it was done (without a prompt) differs from how it currently is done, remains irrelevant according to your own criteria, which are none. Anyway, my point with AARON isn’t about this piece of work specifically, but rather that there is prior work, and this one is JUST an example. Consequently, the starting point is wrong.

Anyway… even if you did question this, I argued for more, showing that I did try numerous (more than 50) models, including very current ones. It even makes me curious whether you, who are arguing for the capabilities and their progress, tried more models than I did, and if so, where I can read about it and what you learned from such attempts.


“Venture capital finance has dried up amid political and economic pressures, prompting a dramatic fall in new company formation.” Posted in technology as most of the funded companies are in technology. The most shocking piece is arguably the number of funded companies per year, with a clear peak in 2018 that is 50x (!) more than last year, 2023.