I recently suggested to my girlfriend that we play Sniper Elite 4. She’s not usually one for shooter games (she’s more of a Stardew Valley/Animal Crossing/Cozy Grove/Coral Island type of gamer), and I wasn’t sure she’d like it. She thought she wouldn’t, but she’s happy to try out a suggestion of mine every once in a while, even if she thinks we won’t play it much.
We loaded into San Cellini island and she initially struggled with the controls and how to aim and shoot. The gravity and wind effects weren’t super easy to grasp either (she’s on the Steam Deck, which makes aiming slightly harder too). She accidentally fired a couple of times while I was giving her a quick tutorial, which attracted a Nazi soldier who came to investigate and shot at her (scaring her a bit when she got hit). Off to a rocky start, and I could tell she wasn’t enjoying it at all.
We got to the first tower, where the game gives you a pretty good view of the area and lets you fairly safely shoot a bunch of Nazi soldiers. Time to shoot! Here we learned that she gets a bit jumpscared when the game suddenly cuts to a slow-motion killcam, especially if I was the one triggering it. Again, not a great start for her.
She struggled, but did hit a couple of them. Then on the 3rd or 4th kill, the game showed her the magic words: “TESTICLE SHOT”. This caught her completely off guard, and she immediately exclaimed “WAIT DOES THAT MEAN I SHOT HIS BALLS OFF?!”. I have never witnessed her do such a complete 180 on her opinion of a game. Suddenly it became extremely enjoyable for her. The first time she got a killcam where she could see the Nazi balls pop one after the other was like giving her crack cocaine or something. Total bloodlust.
We’ve played through the entire campaign in a span of two weeks, then the DLC, the overwatch missions (twice to play both roles) and now the survival maps. Every evening after dinner she asks if we can “shoot more Nazi balls”. Her spirit animal is Hugo Stiglitz at this point.
God I love that woman.
Honestly the presentation was very underwhelming. Improvements in raster seem fairly small and don’t warrant an upgrade. DLSS still falls short in visual quality and has annoying artifacts, and I worry that the industry will use it as an excuse to release poorly optimised games. Counting DLSS frames as part of the frame count is just misleading.
NVENC is cool, but I don’t use that often enough for it to be a selling point to me.
I’ve been enjoying the memes about the presentation though, because what the fuck was that mess.
This sounds a lot like what Supreme Commander: Forged Alliance has been doing for ages, and that expansion came out in 2007. The only difference is the unit limit but that’s mostly for performance reasons (and is rarely hit in competitive matches anyway).
How are these mechanics next-gen if they’re more than 15 years old?
But the barrier to entry for publishing a game on Steam is super low; it’s honestly dead simple. And even though Steam takes a sizeable cut, they do tons of work in exchange: promotion, distribution, community management, the modding workshop, Steam Input, testing Steam Deck compatibility, etc…
For indies it’s one of the easiest routes to publish a game. And given the relative success of indies on Steam, it seems to work quite well.
You can download them manually if you want. Updated drivers are rarely that important for performance. Maybe for newer games, but not for 98% of what’s already out there.
And they also mess things up occasionally. Take all those Minecraft performance mods that had to change how the game identified itself to the driver: if the driver recognised Minecraft, it would apply its own game-specific tuning and actually make performance worse instead of better.
It was a little trickier than I remember: they actively promoted illegal ways to obtain the keys, provided the tools to illegally bypass the DRM with them, and (this is what likely caught Nintendo’s attention) they were very actively monetizing it. That was enough to get Yuzu branded as an illegal tool sold for piracy.
Ryujinx was far more nebulous, as few details were leaked; it seems Nintendo just swung its big legalese dick around there. Probably helped by the Yuzu settlement.
If producing an AGI is intractable, why does the human meat-brain exist?
Ah, but here we have to get pedantic a little bit: producing an AGI through current known methods is intractable.
The human brain is extremely complex and we still don’t fully know how it works. We don’t know if the way we learn is really analogous to how these AIs learn. We don’t really know if the way we think is analogous to how computers “think”.
There’s also another argument to be made: that an AGI matching the currently agreed upon definition is impossible. And I mean that in the broadest sense, e.g. humans don’t fit the definition either. If that’s true, then an AI could perhaps be trained in a tractable amount of time, but this would upend our understanding of human consciousness (perhaps justifiably so). Maybe we’re overestimating how special we are.
And then there’s the argument you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, is long enough. That also suggests that AGI is really hard, and that creating one really isn’t “around the corner” as some enthusiasts claim. For any practical AGI we’d have to finish training in maybe a couple of years, not millions of years.
And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?
This is a gross misrepresentation of the study.
That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.
That’s not their argument. They’re saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.
Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.
They’re not talking about achieving it in general, they only claim that no known techniques can bring it about in the near future, as the AI-hype people claim. Again, they prove this.
That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.
That’s not what they did. They set up an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. a computer with limitless memory, infinite and perfect training data, sampling without any bias, the assumption that current techniques can eventually create AGI, an AGI that only has to be slightly better than random chance rather than perfect, etc…), and then present a computational proof that even this scenario contradicts established complexity-theoretic results.
Basically, if you could train an AGI through currently known methods, then you would have an algorithm that solves the Perfect-vs-Chance problem in polynomial time. There’s a technical explanation in the paper that I’m not going to try and rehash, since it’s been too long since I worked on computational proofs, but it seems to check out. And that is a contradiction: we have proof, hard mathematical proof, that no such polynomial-time algorithm can exist, because the problem is NP-hard. Therefore, training an AGI must also be NP-hard, and since every known AI learning method runs in tractable time, it cannot possibly lead to AGI. It’s not a strawman, it’s a hard proof of why it’s impossible, like proving that pi is irrational.
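Roughly, the argument has the shape of a proof by contradiction via reduction. This is just my own sketch of it (the symbols $T$ and $\mathrm{PvC}$ are my shorthand, not the paper’s notation, and the paper’s actual construction is far more careful):

```latex
\begin{align*}
&\text{(1) Assume: some known learning method } T \text{ trains an AGI in polynomial time.}\\
&\text{(2) Reduction: } T \text{ would then yield a polynomial-time algorithm for } \mathrm{PvC}\\
&\quad\ \ \text{(the Perfect-vs-Chance problem).}\\
&\text{(3) Known: } \mathrm{PvC} \text{ admits no polynomial-time algorithm (it is NP-hard).}\\
&\text{(4) (2) contradicts (3), so assumption (1) must be false.}
\end{align*}
```

So the heavy lifting is in step (2), the reduction, which is the part the paper works out in technical detail.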
Ergo, anyone who claims that AGI is around the corner either means “a good AI that can demonstrate some but not all human behaviour” or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and we’d still not end up with an AGI. We need some other breakthrough, e.g. significant advancements in quantum computing perhaps, to even hope to begin work on an AGI. And again, the authors don’t offer a thought experiment, they provide a computational proof.
Sure, but even Epic exclusives aren’t any cheaper than the games on Steam. These savings directly go to the game developer/publisher, not the consumer. This means there’s no incentive for the consumer to switch to Epic other than exclusive games, which is a pretty poor reason to switch away from a well-established platform.
I don’t think we are, at all.
I mostly say it because SMS is so ancient. Not encrypted, messages are stored by the carrier and can be requested by the government, etc… In that sense, even a corporate-controlled messaging system that offers E2EE would be a step up. After all, SMS is pretty corporate-controlled too, just by different ones. But again, this is very much a European perspective, I can see why in the US this might be different.
iMessage is far and away the most popular chat platform here, and is largely responsible for Apple’s local dominance in the smartphone market.
Ah true, iPhones are much more popular in the US. Quite interesting actually how that happened, iPhones aren’t all that popular here at all and Android phones dominate the market. I wonder why Apple hasn’t managed to copy their dominance here as well?
I’ve never heard of such a thing.
Looks like Tello, Cricket, MobileX, US Mobile and T-Mobile can offer it at least. Apparently it’s often marketed as a Tablet plan, which I suppose makes sense, but it seems a lot of carriers allow you to disable SMS in their web portals these days. I thought it’d be more niche in the US but it seems a more common option than I thought.
It’s been interesting to hear from you about your perspective on this, thanks!
I personally experience occasional delays in sending/receiving messages, or messages suddenly coming in in bulk. Thankfully only very rarely does a message not come in at all, but I think those occasional delays are what led to SMS’s reputation for poor reliability. It makes sense that the US would try to keep those issues to a minimum if so many people still use it, whereas in Europe perhaps it’s less of a priority?
Ah, I see. Your point isn’t necessarily reliability but availability. It’s an interesting perspective to hear that the US appears to be so behind (at least from a European perspective, of course) when it comes to messaging apps. As far as SMS reliability goes, I have occasionally had messages not send, or come in considerably delayed. Or stuff like 2-factor auth texts not coming in, requesting a new one and then suddenly receiving 3 at a time. Not deal-breaking or anything, just the occasional annoyance.
I don’t think WhatsApp allows you to send a message to someone who doesn’t have the app. So WhatsApp would just inform you. Although I don’t recall the last time someone did not have either WhatsApp or Signal installed. But again, that appears to be far more common in the US?
Do you ever miss the extra features that web messaging brings, like in-chat polls, voice messages, etc…? I’m not sure how much of that RCS supports (because almost nobody uses that here). To me it seems like the convenience of web messaging outweighs the “does person x have app y” question, but that’s probably because I never really have to ask myself that question.
I also just realised that you state that everyone has SMS messaging. There are phone plans available here that don’t offer SMS messaging anymore. You can still receive them, but sending them either doesn’t work or costs a high premium (obviously this disadvantage is offset by a lower price for the rest of the plan). I wouldn’t be entirely surprised if SMS eventually just gets phased out.
If you send someone an SMS you know with great certainty that it will be received.
Not to get too bogged down in this debate or anything, but this European is surprised to hear you say that SMS is reliable. One of the big motivators to use web-based messaging apps is that SMS is so notoriously unreliable, with messages occasionally failing to send or arrive. Has SMS reliability improved much in recent years? Or is web-based messaging less reliable in your experience?
Genuinely curious btw, I’m not in the same party as the troll elsewhere in this thread.
I only really notice stutters in heavily modded Minecraft, where it’s clearly linked to the garbage collector. In more demanding games I don’t notice any stuttering really. Or at least, none that I can’t easily link to something triggering in the game that is likely causing it.
Sure, perhaps I have slightly lower average FPS compared to a 7800x3D, but I also use this PC for productivity, where the extra oomph really does help. Still, 97% of a framerate that’s already way above what my 144Hz monitors can display is still more than enough. I don’t think the performance difference is really noticeable, other than in certain benchmarks or if you try really hard to see it.
It’s considerably faster than a 5800x3D though.
Presumably it does run through Wine.