• 0 Posts
  • 24 Comments
Joined 2Y ago
Cake day: Jul 02, 2023


ChatGPT told me 188, but then admitted it was 120 after I bullied it, and then landed on 95

Edit:

Great news, gamers: ChatGPT just explained why, despite a net loss in 2023, the industry is expected to turn a profit by 2029


Feels right. As much as I wanted to try and build/play some kind of support or utility, the character balance just wildly favours hard-carries with ults that pretty much guarantee at least one kill.

I think if they can tune in some more viable support/utility options, we’ll see better outcomes across the board


As someone older than 33, seeing “preserves” and “and it still works!” hits in a way I was not expecting to get hit this morning


I agree that the author didn’t do a great job explaining, but they are right about a few things.

Primarily, LLMs are not truth machines. That’s just flatly and plainly not what they are. No researcher, not even OpenAI, makes such a claim.

The problem is the public perception that they are. Or that they almost are. Because a lot of the time, they’re right. They might even be right more often than some people’s dumber friends. And even when they’re wrong, they sound right; they sound smarter than most people’s smartest friends.

So, I think that the point is that there is a perception gap between what LLMs are, and what people THINK that they are.

As long as the perception is more optimistic than the reality, a bubble of some kind will exist. But just because there is a “reckoning” somewhere in the future doesn’t mean it will crash to nothing. It just means investment will align more closely with realistic expectations as it becomes clearer what realistic expectations even are.

LLMs are going to revolutionize some industries and destroy others. They will absolutely, fundamentally change the way we interact with technology. No doubt… but for applications that strictly demand correctness, they are not appropriate tools. And investors don’t really understand that yet.


They had a C&C team shooter at one point that was a blast.

Imho they really missed out on defining the split-role FPS/RTS genre; they had all the pieces but just never put them together.



I have no idea how poorly the authors of the study communicated their work because I haven’t read the study.

Jumping to the conclusion that it’s junk because some news blogger wrote an awkward and confusing article about it isn’t fair at all. The press CONSISTENTLY writes absolute trash on the basis of scientific papers. That’s like, science reporting 101.

And, based on what you’re saying, this still sounds completely different. RNA sequencing may be a mechanism behind the “why”, but you would knock my fucking socks off if you could use RNA to predict the physical geometry of a fingerprint. If you could take a fingerprint and some RNA and say whether they belong to the same person, that would be unbelievably massive.


Right, so this methodology is a completely different approach. I don’t think it’s fair to call snake oil on this specifically with the justification that other models (using an entirely different approach) were snake oil.

Again, not saying it’s real or not, I’m just saying that it’s appropriate to try new approaches to examine things we already THINK we know, and to be prepared to carefully and fairly evaluate new data that calls into question things we thought we knew. That’s just science.


I mean, the research is the research and the data is the data.

If there are specific critiques of the research methodology that call the validity of the observed data into question, that’s fair. “It’s ‘well known’ that…” isn’t a scientific argument. It’s the exact opposite; it’s literally religion.

Also, the conclusions being drawn from the data by the researchers or 3rd parties might be a problem.

To be fair, the ML of today is unrecognizable compared to what it was in 2008. And I’d be willing to bet the model your cousin was exposed to wasn’t a machine learning model at all, but some handcrafted marker analysis with dubious justification and a great sales team.

The great thing about ML science is that it’s super accessible. This was an undergrad project. The next step, establishing validity, really just requires a larger data set. If it’s bogus, that’ll come out. If it’s valid, that’ll come out too. The cost of reproducibility is so low that even hobbyists can verify the results.
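To illustrate how low the barrier to reproduction is, here’s a toy sketch of the whole train/holdout/score loop in plain Python. The data is synthetic and the classifier is a bare-bones 1-NN; none of this has anything to do with the actual study, it’s just the skeleton any hobbyist replication would follow:

```python
import random

def nearest_neighbor_predict(train, query):
    """Classify a point by the label of its closest training point (1-NN)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: dist(item[0], query))[1]

def holdout_accuracy(dataset, seed=0, test_frac=0.3):
    """Shuffle, split into train/test, and score: the core of a cheap replication."""
    rng = random.Random(seed)
    data = dataset[:]
    rng.shuffle(data)
    cut = int(len(data) * (1 - test_frac))
    train, test = data[:cut], data[cut:]
    correct = sum(nearest_neighbor_predict(train, x) == y for x, y in test)
    return correct / len(test)

# Synthetic two-cluster data standing in for real measurements.
rng = random.Random(42)
dataset = [((rng.gauss(0, 1), rng.gauss(0, 1)), "a") for _ in range(50)] + \
          [((rng.gauss(4, 1), rng.gauss(4, 1)), "b") for _ in range(50)]

# Well-separated clusters, so accuracy should land well above chance.
print(holdout_accuracy(dataset))
```

Swap in a real feature extractor and a bigger dataset and that’s basically the verification experiment; the expensive part is collecting the data, not running the model.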


That is not at all what this article is about. The headline is terrible.

The research is suggesting that there may exist “per-person” fingerprint markers, whereas right now we only use “per-finger” markers. It’s suggesting that they could look at two different fingers, (left index and right pinky, for example) and say “these two fingerprints are from the same person”.

When they say “not unique”, they mean “there appear to be markers common to all fingerprints of the same person”


The title of the article is so misleading it’s pretty much wrong.

If you read the article, what the researchers did was train an AI model that appears to be able to associate different fingerprints of the SAME person.

Example: Assume your fingerprints are not on record. You do a crime and accidentally leave a print of your left index finger at the scene.

THEN you do another crime and leave your RIGHT MIDDLE fingerprint at the scene.

The premise is that the AI model appears to be able to correlate DIFFERENT prints from the SAME person.

So, in the context of the research, they’re saying there’s reason to believe that fingerprint markers might exist on a per-person basis, rather than strictly a per-finger basis.

Terrible headline, terribly written article, and IMO not nearly enough evidence that the correlation actually exists, and even less that it’s fit to be used as legal evidence.

That being said, based on the comments section I think most people didn’t really grok what this research was, which is understandable given the terrible headline


Even in that case, it’s easy enough to solve: grant permission explicitly under the condition that the assets remain in the context of the game (eg, don’t export them to other games).

Consider other games that explicitly grant blanket permission for people to use their game footage in videos (Team17, for example).


The modding community is the reason Bethesda has been able to get away with selling the same game for over a decade.

There are a million ways to solve the “legal problem”, such as “don’t initiate legal action against modders”.

This wasn’t a problem that needed a solution.


As opposed to… Late Firefly? Like, season 2?


That’s an interesting perspective that I hadn’t considered.

I’m not big on doomscrolling, I don’t have Facebook or Instagram or Twitter… I MOSTLY use my phone for activities that involve dialogue. I’d never really considered that this maybe isn’t representative of broader behaviour.

Has this always been the case? Did the phone changes meet existing behaviour, or drive people to a fundamentally different behaviour?



I’m sure you know someone with a phone like this right now.


It’s crazy how the author keeps shitting on the phone, being all “wow, we’ve learned so much since then”, when physical keyboards were the fucking best.

Touchscreen keyboards are super error-prone, and you have to physically look at them as you type. It used to be that you could write and send messages without looking at your phone at all: under your desk, while keeping eye contact and a verbal conversation going with your teacher, and they wouldn’t even know.


In this case the reason that you see the rest of the pack in your rear view mirror isn’t because you’re in the lead: it’s because you’re getting lapped.

I strongly encourage you to reach out to Linus directly to inform him of your insights. Please post back with the results.


I think people underestimate the challenges involved when building software systems tightly coupled to the underlying hardware (like if you are a team tasked with building a next gen server).

Successful companies in the space don’t underestimate it though, the engineers who do the work don’t underestimate it, and Linus doesn’t underestimate it either.

The domain knowledge your org needs to mitigate the business risk isn’t trivial. The value proposition always needs to be pretty juicy to overcome the inertia caused by institutional familiarity. Like, can we save a few million on silicon? Sure. Do we understand the challenges well enough to keep our hardware release schedules without taking shortcuts that will result in reputational damage? Do we have the right people in place to oversee the switch?

Over and over again, it comes back to “is it worth it”, and that’s a much more complex question to answer than just picking the cheaper chips.

I imagine at this point there’s a metric fuckton of enterprise software that strictly dictates it must run on x86, even when it doesn’t have to. If you stray from the vendor’s hardware requirements, bullshit or not, you lose your support. There’s likely uptake friction in some consumer segments as well.


They’re going to be writing the firmware for enterprise grade servers? If not, they’re irrelevant to what Linus is talking about here.


Big cloud providers will take the opportunity to move to ARM as it’s cheaper for them.

The cloud isn’t a literal ephemeral cloud. It’s still a physical thing: physical devices, physically linked. Physical RAM in physical slots, with physical buses and physical chips (not just CPUs; plenty of other ICs are in those machines too). The complexity of arranging and linking all that physical hardware is incredible.

Nobody is out there writing enterprise server firmware in Java. How could you even have a Java VM when the underlying components of the physical device don’t yet have the code to offer the services the VM needs to run?

To be incredibly blunt, and I don’t say this to be rude, your questions and assertions are incredibly ignorant. So much so that it’s essentially nonsense. It’s like asking “why do we still even have water when we have monster energy drink?” It demonstrates such a fundamental misunderstanding of the premise that it’s honestly difficult to even know where to begin explaining how faulty the line of thinking even is.

Linus isn’t talking about JS developers at all. Even a little bit. I promise you, you would not enjoy hearing his unfiltered thoughts on JS developers.

He’s talking about the professional engineers who design, build, and write firmware for enterprise-grade servers. There’s no overlap between JS coders and those engineers.


The luxury you have of not knowing a thing about enterprise-grade servers, because your world is JavaScript, was made possible, and continues to be made possible, by people working on layers that do require familiarity with the underlying hardware.


Certain types of scheduled announcements usually have insider trading blackouts associated with them automatically, like quarterly earnings reports.

But you ABSOLUTELY can time other announcements favourably around your predefined transactions.