This sounds a lot like what Supreme Commander: Forged Alliance has been doing for ages, and that expansion came out in 2007. The only difference is the unit limit but that’s mostly for performance reasons (and is rarely hit in competitive matches anyway).
How are these mechanics next-gen if they’re more than 15 years old?
But the barrier to entry for publishing a game on Steam is super-low; it’s honestly dead simple. And even though Steam takes a sizeable cut, they do tons of work in exchange: promotion, distribution, community management, the modding workshop, Steam Input, testing Steam Deck compatibility, etc…
For indies it’s one of the easiest routes to publish a game. And given the relative success of indies on Steam, it seems to work quite well.
You can download them manually if you want. Updated drivers are rarely that important for performance. Maybe for newer games, but not for 98% of what’s already out there.
And they also mess things up occasionally. Like all those Minecraft performance mods that had to change how the game looked to the driver, because if the driver recognised Minecraft it would apply its own tuning and performance would get worse instead of better.
It was a little trickier than I remember: they actively promoted illegal ways to obtain the keys, provided the tools to illegally bypass the DRM with them, and (this is what likely caught Nintendo’s attention) they were very actively monetizing it. This was enough to get Yuzu branded as an illegal tool sold for piracy.
Ryujinx was far more nebulous, as few details were leaked; it seems Nintendo just swung its big legalese dick around there. Probably helped by the Yuzu settlement.
If producing an AGI is intractable, why does the human meat-brain exist?
Ah, but here we have to get pedantic a little bit: producing an AGI through current known methods is intractable.
The human brain is extremely complex and we still don’t fully know how it works. We don’t know if the way we learn is really analogous to how these AIs learn. We don’t really know if the way we think is analogous to how computers “think”.
There’s also another argument to be made: that an AGI matching the currently agreed-upon definition is impossible. And I mean that in the broadest sense, e.g. humans don’t fit the definition either. If that’s true, then an AI could perhaps be trained in a tractable amount of time, but this would upend our understanding of human consciousness (perhaps justifiably so). Maybe we’re overestimating how special we are.
And then there’s the argument that you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, is long enough. That also suggests that AGI is really hard, and that creating one really isn’t “around the corner” as some enthusiasts claim. For any practical AGI, we’d have to finish training in maybe a couple of years, not millions of years.
And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?
This is a gross misrepresentation of the study.
That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.
That’s not their argument. They’re saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.
Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.
They’re not talking about achieving it in general; they only claim that no known techniques can bring it about in the near future, contrary to what the AI-hype people claim. Again, they prove this.
That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.
That’s not what they did. They set up an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. a computer with limitless memory, infinite and perfect training data, the ability to sample without any bias, the assumption that current techniques can eventually create AGI, an AGI that only has to be slightly better than random chance rather than perfect, etc…), and then present a computational proof showing that even this scenario contradicts established results.
Basically, if you can train an AGI through currently known methods, then you have an algorithm that can solve the Perfect-vs-Chance problem in polynomial time. There’s a technical explanation in the paper that I’m not going to try and rehash since it’s been too long since I worked on computational proofs, but it seems to check out. But this is a contradiction: we have proof, hard mathematical proof, that Perfect-vs-Chance is NP-hard, so no polynomial-time algorithm for it can exist (assuming P ≠ NP, as basically everyone does). Therefore, learning an AGI must also be NP-hard. And because every known AI learning method runs in feasible (polynomial) time, it cannot possibly lead to AGI. It’s not a strawman, it’s a hard proof of why it’s impossible, like proving that pi has infinitely many decimals or something.
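If I sketch my reading of the argument (my own paraphrase of the shape of the proof, not the paper’s actual formalism):

```latex
\begin{align*}
&\text{Assume: an AGI can be trained by known methods in polynomial time.} \\
&\Rightarrow \text{that training algorithm solves Perfect-vs-Chance in polynomial time.} \\
&\text{But Perfect-vs-Chance is NP-hard, so (assuming } P \neq NP \text{)} \\
&\text{no polynomial-time algorithm for it exists.} \\
&\Rightarrow \text{Contradiction, so the assumption must be false.}
\end{align*}
```

It’s a classic proof by contradiction via reduction: show that a tractable solution to one problem would imply a tractable solution to a problem already known to be intractable.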
Ergo, anyone who claims that AGI is around the corner either means “a good AI that can demonstrate some but not all human behaviour” or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and we’d still not end up with an AGI. We need some other breakthrough, e.g. significant advancements in quantum computing perhaps, to even hope to begin work on an AGI. And again, the authors don’t offer a thought experiment, they provide a computational proof for this.
Sure, but even Epic exclusives aren’t any cheaper than the games on Steam. These savings directly go to the game developer/publisher, not the consumer. This means there’s no incentive for the consumer to switch to Epic other than exclusive games, which is a pretty poor reason to switch away from a well-established platform.
I don’t think we are, at all.
I mostly say it because SMS is so ancient. Not encrypted, messages are stored by the carrier and can be requested by the government, etc… In that sense, even a corporate-controlled messaging system that offers E2EE would be a step up. After all, SMS is pretty corporate-controlled too, just by different ones. But again, this is very much a European perspective, I can see why in the US this might be different.
iMessage is far and away the most popular chat platform here, and is largely responsible for Apple’s local dominance in the smartphone market.
Ah true, iPhones are much more popular in the US. It’s quite interesting how that happened, actually; iPhones aren’t all that popular here and Android phones dominate the market. I wonder why Apple hasn’t managed to replicate that dominance here as well?
I’ve never heard of such a thing.
Looks like Tello, Cricket, MobileX, US Mobile and T-Mobile can offer it at least. Apparently it’s often marketed as a Tablet plan, which I suppose makes sense, but it seems a lot of carriers allow you to disable SMS in their web portals these days. I thought it’d be more niche in the US but it seems a more common option than I thought.
It’s been interesting to hear from you about your perspective on this, thanks!
I personally experience occasional delays in sending/receiving messages, or messages suddenly coming in in bulk. Thankfully, only very rarely do messages not come in at all, but I think those occasional delays are what gave SMS its reputation for poor reliability. It makes sense that the US would try to keep those issues to a minimum if so many people still use it, whereas in Europe perhaps it’s less of a priority?
Ah, I see. Your point isn’t necessarily reliability but availability. It’s an interesting perspective to hear that the US appears to be so behind (at least from a European perspective, of course) when it comes to messaging apps. As far as SMS reliability goes, I have occasionally had messages fail to send, or come in considerably delayed. Or stuff like 2-factor auth texts not coming in, requesting a new one and then suddenly receiving 3 at a time. Not deal-breaking or anything, just the occasional annoyance.
I don’t think WhatsApp allows you to send a message to someone who doesn’t have the app; it would just inform you that the recipient isn’t on the platform. Although I don’t recall the last time someone didn’t have either WhatsApp or Signal installed. But again, that appears to be far more common in the US?
Do you ever miss the extra features that web messaging brings, like in-chat polls, voice messages, etc…? I’m not sure how much of that RCS supports (because almost nobody uses that here). To me it seems like the convenience of web messaging outweighs the “does person x have app y” question, but that’s probably because I never really have to ask myself that question.
I also just realised that you state that everyone has SMS messaging. There are phone plans available here that don’t offer SMS messaging anymore. You can still receive them, but sending them either doesn’t work or costs a high premium (obviously this disadvantage is offset by a lower price for the rest of the plan). I wouldn’t be entirely surprised if SMS eventually just gets phased out.
If you send someone an SMS you know with great certainty that it will be received.
Not to get too bogged down in this debate or anything, but this European is surprised you say that SMS is reliable. One of the great motivators to use web-based messaging apps is that SMS is so notoriously unreliable, with messages occasionally not being sent or received. Has SMS reliability improved much in recent years? Or is web-based messaging less reliable in your experience?
Genuinely curious btw, I’m not in the same party as the troll elsewhere in this thread.
I only really notice stutters in heavily modded Minecraft, where it’s clearly linked to the garbage collector. In more demanding games I don’t notice any stuttering really. Or at least, none that I can’t easily link to something triggering in the game that is likely causing it.
Sure, perhaps I have slightly lower average FPS compared to a 7800X3D, but I also use this PC for productivity, and there the extra oomph really does help. Still, 97% of a framerate that’s already way above what my 144Hz monitors can display is still above what they can display. I don’t think the performance difference is really noticeable outside of certain benchmarks, or unless you try really hard to see it.
It’s considerably faster than a 5800X3D though.
Conservative undertones? I’m curious why you think that. Could you elaborate?
There’s strong tones of anti-corporatism and a clear favour towards communal living. And the obvious “care-for-the-Earth” stuff. But I don’t think those are necessarily conservative. I could see the argument for Christian undertones, but more in the traditional “love thy neighbour” and “custodianship” sense.
It’s additional space around components, showing what’s behind them. So you’re seeing more stuff in between windows, which makes things look less organised imo. The “whitespace” isn’t really white here; it looks like another unnecessary element crammed in between two windows that might as well sit neatly next to one another, making the windows slightly larger. I also like being able to move my mouse to the edge of things (e.g. the taskbar) without ending up in the whitespace, which causes misclicks for me.
Again, my opinion. Not stating absolute truths here.
Vic3 certainly isn’t a shell of Vic2. It’s a considerably more complex and interesting game.
There are, however, some frustrating and obtuse mechanics, particularly around warfare. It’s not even that bad once you get into it properly, but as a new player it’s definitely a bit frustrating, and it’s quite different from what players were used to in Vic2.
Honestly, the presentation was very underwhelming. Improvements in raster performance seem fairly small and don’t warrant an upgrade. DLSS still falls short in visual quality and has annoying artifacts, and I worry that the industry will use it as an excuse to release poorly optimised games. Counting DLSS-generated frames as part of the frame count is just misleading.
NVENC is cool, but I don’t use that often enough for it to be a selling point to me.
I’ve been enjoying the memes about the presentation though, because what the fuck was that mess.