> maybe we should not be building our world around the premise that it is
I feel like this is a really important bit. If LLMs turn out to have unsolvable issues that limit the scope of their application, that’s fine; every technology has limits, but we need to be aware of them. A fallible machine learning model is not dangerous in itself; AI-based grading, plagiarism checking, resume filtering, coding, etc. applied without skepticism is dangerous.
LLMs probably have genuinely good applications in tasks that could not be automated in the past, but we should be very careful about what we assume those applications to be.
I would just like to give you props for owning up and listening to the information. I don’t think your reasoning was wrong in any way; there was simply more context, likely relevant, that you hadn’t been privy to, and once you were informed of it you reevaluated. Not everyone does that, and I think a very valuable part of this community is that people do (I know I’m not always particularly good at it myself).
A bit of a side note, but I think your comment might have been posted a few times.
I’m surprised I haven’t seen anyone mention one of my favorites:
Spec Ops: The Line.
The risk of going in blind is that at first it seems like a generic cover shooter that doesn’t do everything quite as well as its competitors, but that actually works to its advantage once you get into it.
If you haven’t tried it, I highly recommend it; you can usually find it for really cheap.