As for useful implementations, my cousin is an orthopedic surgeon and they use VR headsets, 3D X-ray scanners, 3D printers, and a whole bunch of sci-fi stuff to prep for operations, but they are not using a Meta Quest 2; we’re talking $50k headsets and million-dollar equipment. None of that does anything for the gaming market.
That’s really awesome and I love seeing that the tech is actually seeing good uses.
Yeah. A lot of what you’re saying parallels my thoughts. The PC and console gaming market didn’t exist until there were more practical, non-specialty uses for computing and, importantly, affordability. To me, it seems that the manufacturers are trying to skip that and jump straight to the lucrative software part, while also skipping the part where you pay people fair wages to develop (the games industry is super exploitative of devs). Or, like the company formerly known as Facebook, they use VR devices as another tool to harvest personal information for profit (head-tracking data can be used to identify people, similar to gait analysis), rather than having any interest in actually developing VR long-term.
Much as I’m not a fan of Apple or the departed sociopath that headed it, a company like Apple in its early years is probably what’s needed: people willing to actually take on some risk for the long haul to develop the hardware and base software to make a practical “personal computer” of VR.
When I can code in one for 10 hours a day without fucking up my eyes, making myself vomit, sweating like a pig, and getting neck strain, it’ll have a shot at taking over the computer market. Until then, it’s a gimmick.
Absolutely agreed. Though, I’d note that there is tech available for this use case. I’ve been using Xreal Airs for several years now as a full monitor replacement (Viture is more FOSS-friendly at this time). Birdbath optics are superior for productivity use compared to the waveguides and lensed optics used in VR headsets. To get readable text that doesn’t strain the eyes, you need higher pixels-per-degree, not higher FOV.
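To put rough numbers on that, here’s a quick sketch (Python) of the pixels-per-degree arithmetic; the resolution and FOV figures are approximate, from-memory values used purely for illustration:

```python
# Rough pixels-per-degree (PPD) comparison: birdbath glasses vs. a VR headset.
# The resolution/FOV figures below are approximate and purely illustrative.

def ppd(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Crude PPD estimate: horizontal pixels spread across the horizontal FOV."""
    return horizontal_pixels / horizontal_fov_deg

# ~1920 px per eye over a ~46 degree FOV (birdbath glasses, Xreal Air class)
print(f"Birdbath glasses: ~{ppd(1920, 46):.0f} PPD")

# ~1832 px per eye over a ~90 degree FOV (Quest 2 class VR headset)
print(f"VR headset:       ~{ppd(1832, 90):.0f} PPD")

# 20/20 visual acuity is often quoted around 60 PPD, so the birdbath optics
# land much closer to "readable text" territory despite the smaller FOV.
```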
The isolation of VR is also a negative in many cases, as interacting with and being aware of the real world is frequently necessary in productivity use (both for interacting with people and for mitigating eye strain). Apple was ALMOST there with the Vision Pro but tried to be clever rather than practical. They shouldn’t have bothered with the cameras and should have just let the real world in, unfiltered.
I think that the biggest problem is the lack of investment and willingness to take on risk. Every company just seems to want a quick cash-grab “killer app” but doesn’t want to sink years into developing practical things that aren’t as flashy but solve real-world problems. Because that’s hard and isn’t likely to make the line go up every quarter.
I agree with you, to an extent. I would say it’s a lot more complicated than that with World of Warcraft, which is an MMO and does not revolve around gambling except in the aspect of randomly generated loot.
The way the drops work is literally the same approach as a slot machine, but with more steps to take up your time with boring shit and demand more of your life be dedicated to it, so there’s less risk of you getting distracted by things like hobbies or games with finite stories and quality writing. A one-armed bandit might snag a handful of whales that spend all of their time feeding the machine. The Wrath of the Lich Bandit gets a much larger percentage of its users in front of it for a larger amount of their time, increasing the ratio of addicts/whales caught. Add in expansions, real-money auctions, etc. and you’ve got something much more fucked up than anything on a Vegas casino floor.
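To make the mechanical comparison concrete, here’s a minimal sketch (Python; the item names and drop weights are invented for illustration) of a weighted loot roll. Under the hood it’s the same weighted random draw a slot reel makes:

```python
import random

# Hypothetical drop table: item -> drop weight. Rates are invented for illustration.
LOOT_TABLE = {
    "vendor trash": 889,
    "uncommon drop": 100,
    "rare drop": 10,
    "legendary mount": 1,   # the "jackpot"
}

def roll_loot(table: dict[str, int]) -> str:
    """One boss kill = one weighted random draw, same math as a slot reel."""
    items = list(table)
    weights = list(table.values())
    return random.choices(items, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Simulate a pile of kills to see how rarely the "jackpot" actually hits.
    results = [roll_loot(LOOT_TABLE) for _ in range(1000)]
    print(results.count("legendary mount"), "jackpots in 1000 kills")
```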
I think a big problem is (as of the last time I checked) the complete lack of anyone making practical things for VR. Not saying that everything needs to be practical to justify its existence, but I think that VR companies have been continually trying to skip ahead to the equivalent of where computing is now, ignoring the history of computers being primarily targeted at research and practical applications before they were adopted en masse and provided a lucrative market. So, instead, they just keep making glorified tech demos, hoping that someone else will do the hard work so they can rake in easy money by forcing it through their app stores.
TL;DR: I think that short-sighted, profit-driven decision making is the reason that VR isn’t yet anything more than a niche.
Disclosure: I don’t play CoD anymore (I also think the series is overrated) and would like to see Activision/Blizzard burn.
You are, unfortunately, partially misperceiving and/or mischaracterizing the game and genre. Most are not murder simulators. Some certainly are (ex. Hitman, and the skippable single-player bits of one of the CoD games) but those are the minority - the plots generally revolve around military conflicts (whether military conflicts are by definition murder or not is another thing altogether, though I would personally say they are in the same ethical place) and the multiplayer is basically technological sports. Since the early 2000s at least, they have been propaganda supporting imperialism and normalizing military conflict, though Gen Z seems to have wised up on that.
As for the “real-world guns” thing, they aren’t used anymore, with limited exceptions where a firearms company explicitly partners with the publisher.
Additionally, the correlation between individuals playing violent video games and taking part in violence just does not exist in any research that has been conducted. Violent video games, in fact, allow people to work out aggression and frustration in healthy, non-destructive ways. Your anger is pointed in the wrong direction. If you want to target something that will have an actual impact, dedicate some energy to pushing fixes for wealth inequality and poverty. Yes, that’s harder to pin down, but most things worth doing aren’t easy.
Shared libraries/dynamically-linked libraries, along with faster storage, solve a lot of the historical optimization issues. Modern compilers and OSes generally take care of that, if the right flags are used. With very few AAA games using in-house engines, it’s even less work for the studio, supposing the game engine developers are doing their jobs.
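As a tiny illustration of what dynamic linking means at runtime, here’s a sketch using Python’s ctypes (assuming a typical glibc-based Linux system where libm.so.6 exists) that calls into a shared library this script never compiled or shipped itself:

```python
import ctypes

# Load the system math library at runtime; nothing from it is compiled into
# this script, it's resolved from the shared library already on disk.
libm = ctypes.CDLL("libm.so.6")  # assumes a typical glibc-based Linux system

# Declare the C signature of cos(): double cos(double)
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0, computed by the single, system-wide libm
```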
That said, you do still have a bit of a point. Proper QA requires running the software on all supported platforms, so there is a need for additional hardware, if not offloading QA to customers via “Early Access”. Adding to that, there are new CPU architectures in the wild (or soon to be) that weren’t there 5 years ago and may not yet be well supported by the toolchains.
Gaben is absolutely correct in practice though, it’s a distribution problem. EA, Epic, and the rest forcing their storefront launchers and invasive DRM on users makes the experience worse and drives people to pirate more.
An opportunity RISC-V will offer to anyone with a billion dollars lying around.
Exactly this. Nvidia and Seagate, among others, have already hopped on this. I hold out hope for more accessible custom processors that would enable hobbyists and smaller companies to join in as well, and make established companies more inclined to try novel designs.
x86 market share is 99.999% driven by published software. Microsoft already tried expanding Windows, and being Microsoft, made half a dozen of the worst decisions simultaneously.
Indeed. I’ve read opinions that that was historically also a significant factor in PowerPC’s failure - no one is going to want to use your architecture if there is no software for it. I’m still rather left scratching my head at a lot of MS’s decisions on their OS and device support. IIRC, they may finally be moving toward an approach to drivers that’s more similar to Linux’s, but without being a bit more open with their APIs, I’m not sure how that will work.
Linux dorks (hi)
Hello! 0/
What’s really going to threaten x86 are user-mode emulators like box86, fex-emu, and qemu-user. That witchcraft turns Windows/x86 binaries into something like Java: it will run poorly, but it will run.
Hrm…I wonder if there’s some middle ground or synergy to be had with the kind of witchcraft that Apple is doing with their Rosetta translation layer (though, I think that also has hardware components).
Right now those projects mostly target ARM, obviously. But there’s no reason they have to. Just melting things down to LLVM or Mono would let any native back-end run up-to-date software on esoteric hardware.
That would be brilliant.
I would’ve had my doubts, until Apple somehow made ARM competitive with x86. A trick they couldn’t pull off with PowerPC.
Yeah. From what I’ve pieced together, Apple’s dropping PowerPC ultimately came down to perf/watt and delays in delivery from IBM of a suitable chip that could be used in a laptop and support 64-bit instructions. x86 beat them to the punch and was MUCH more suitable for laptops.
Interestingly, the mix of a desire for greater vertical integration and chasing perf/watt is likely why they went ARM. With their license, they have a huge amount of flexibility and are able to significantly customize the designs from ARM, letting them optimize in ways that Intel and AMD just wouldn’t allow.
I guess linear speed barely ought to matter, these days, since parallelism is an order-of-magnitude improvement, and scales.
It is definitely a complicated picture when figuring out performance. Lots of potential factors come together to make the whole picture. You’ve got ops per clock cycle per core, physical size of a core (RISC generally has fewer transistors per core, making them smaller and more uniform), integrated memory, on-die co-processors, etc. The more that the angry little pixies can do in a smaller area, the less heat is generated and the faster they can reach their destinations.
ARM, being a mature and customizable RISC arch, really should be able to chomp into x86 market share. RISC-V, while younger, has been able to grow and advance at a pace not seen before, to my knowledge, thanks to its open nature. More companies are able to experiment and try novel architectures than under x86 or ARM. The ISA is what’s gotten me excited again about hardware and learning how it’s made.
Initial market, absolutely. It’s already there at this point. Low-power 32-bit ARM SoC MCUs have largely replaced 8-bit and 16-bit AVR MCUs, as well as MIPS, in new designs. They’re just priced so well for the performance, with relative cost savings on the software/firmware dev side too (ex. Rust can run with its std library on Espressif chips, making development much quicker and easier).
With ARM licensing looking less and less tenable, more companies are also moving from it to RISC-V, especially if they have in-house chip architects. So, I also suspect that it will supplant ARM in such use cases - we’re already seeing this in hobbyist-oriented boards, including some that use a RISC-V processor as an ultra-low-power co-processor for beefier ARM multi-core SoCs.
That said, unless there’s government intervention to kill RISC-V under the guise of chip-war concerns (but really, likely because of ARM “campaign contributions”), I suspect that we’ll have desktop-class machines sooner rather than later (before the end of the decade).
Sorry, but this is simply incorrect. Do you know what Eliza is and how it works? It is categorically different from LLMs.
I did not mean to come across as stating that they were the same, nor that the results produced would be as good. Merely that a PDF could be run through OCR and processed into a script for ELIZA, which could produce some response to requests for a summary (ex. providing the abstract).
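As a toy illustration of what I mean, here’s a minimal sketch (Python; the rule patterns and the pre-extracted “abstract” are invented placeholders). An ELIZA-style keyword matcher just maps a pattern like “summarize” to a canned reply built from text it already has on hand:

```python
import re

# Pretend this came out of an OCR pass over the PDF, stored ahead of time.
DOCUMENT = {
    "title": "A Hypothetical Paper",
    "abstract": "We study X and find Y under conditions Z.",  # placeholder text
}

# ELIZA-style rules: a keyword pattern mapped to a canned response template.
RULES = [
    (re.compile(r"\b(summar|abstract|tl;?dr)", re.IGNORECASE),
     lambda doc: f"Here is the abstract of '{doc['title']}': {doc['abstract']}"),
    (re.compile(r"\b(title|called)\b", re.IGNORECASE),
     lambda doc: f"The document is titled '{doc['title']}'."),
]

def respond(user_input: str, doc: dict) -> str:
    """Pure pattern matching: no understanding, just keyword -> canned reply."""
    for pattern, reply in RULES:
        if pattern.search(user_input):
            return reply(doc)
    return "Please tell me more about what you want from the document."

print(respond("Can you summarize this paper for me?", DOCUMENT))
```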
My point being that these technologies, though fundamentally different and at very different levels of technological sophistication, can both, at a high level, accomplish the task. Both the quality of the result and the capabilities beyond that surface level are very different. However, both would be able to produce a summary, working within their architectural constraints.
Looking at it this way also gives a good basis for comparing LLMs to intelligence. Both, at a high level, can accomplish many of the same tasks, but context matters in more than a syntactic sense, and LLMs lack understanding and comprehension of the data that they are processing.
This is also incorrect.
That paper is both solely phenomenological and states that it is not using an accepted definition of intelligence. On the former point, there’s a significant risk of fallacy in such observation, as it is based upon subjective observation of behavior, not empirical analysis of why the behavior is occurring. For example, leatherette may approximate the appearance and texture of leather but, when examined, it differs fundamentally on both the macroscopic and microscopic level, making it objectively incorrect to call it “leather”.
I think the issue that many people have is that they hear “AI” and think “superintelligence”. What we have right now is indeed AI. It’s a primitive AI and certainly no superintelligence, but it’s AI nonetheless.
Here, we’re really getting into semantics. As the authors of that paper noted, they are not using a definition that is widely accepted academically. Though they definitely do have a good point about some of the definitions being far too anthropocentric (ex. “being able to do anything that a human can do” - really, that’s a shit definition). I would certainly agree with the term “primitive AI” if used akin to programming primitives (int, char, float, etc.), as it is clear that LLMs may be useful components in building actual general intelligence.
It’s crazy how optimized natural life is, and we have a lot left to learn.
It’s a fun balance of both excellent and terrible optimization. The high amount of noise is a feature and may be a significant part of what shapes our personalities and our ability to create novel things. We can do things with our meat-computers that are really hard to approximate in machines, despite having much slower and lossier interconnects (not to mention much less reliable memory and sensory systems).
Give Eliza equivalent compute time and functionality to interpret the data type and it probably could get something approaching a result. Modern LLMs really benefit from massive amounts of compute availability and being able to “pre-compile” via training.
They’re not, in and of themselves, intelligent. That’s not something that is seriously debated academically, though the danger of humans misperceiving them as such very much is. They may be a component of actual artificial intelligence in the future and are amazing tools that I’m getting some hands-on time with, but the widespread labeling of them as “AI” is pure marketing.
The video games industry definitely comes with a lot of stress, but they rely on passion to get value out of those long hours.
That’s called exploitation, plain and simple. It’s predatory behavior. They are knowingly under-compensating and over-working people, knowing that they can get away with it because of that passion. Say the same about just about any other industry and it’s clear how unacceptable it is. Beyond that, stress objectively causes unnecessary illness and death, as proven in decades’ worth of scientific studies.
This sounds like a situation of completely awful management, which won’t be fixed with a union (at least not immediately), since a bad manager can make life suck even if you have decent benefits, reasonable work hours, etc.
Bad management is literally one of the foundational reasons that unions exist in the first place. Management and capital have a significant power imbalance with workers and have, historically and currently, attempted to establish workplace environments and situations that are more exploitative. Collective bargaining is necessary to even the odds and allow for workers to air grievances and get them resolved, without punitive action.
Generally, “exclusive” in this context is referring to exclusivity on a console involved in the (IMO completely unnecessary) console wars.
I do agree that PC is an important item there too, but the problems there are a bit different - for example, shoddy ports (there’s no justification for ports from x86/amd64 consoles to PC being bad), excessive and intrusive DRM, and unreasonable delays or unwillingness to port.
I thought that they tried too hard to align with Quake 3’s twitch-gaming.