MinekPo1 [it/she]

nya !!! :3333 gay uwu

I’m in a bad place rn so if I’m getting into an argument please tell me to disconnect for a bit as I don’t deal with shit like that well :3

  • 1 Post
  • 12 Comments
Joined 2Y ago
Cake day: Jun 14, 2023


if you are interested in puzzle games I can recommend Aliensrock and Icelypuzzles


honestly I agree that slightly longer keys won’t be safe for long, but tbh I’m gonna sit a bit more on my 23-bit RSA keys before migrating
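For scale, a 23-bit modulus falls to plain trial division instantly — a minimal Python sketch (the two primes here are made up for illustration, not an actual key):

```python
def factor(n: int) -> tuple[int, int]:
    """Factor an odd n = p*q by trial division -- fast only because n is tiny."""
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    raise ValueError("no odd factor found")

# hypothetical 23-bit "RSA" modulus built from two small primes
n = 2011 * 3019
print(n.bit_length())  # 23
print(factor(n))       # (2011, 3019)
```

A real attacker wouldn’t even need this loop — any pocket calculator era algorithm breaks a key that small, which is rather the point of the joke.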


most of the video was centred on the player

it wasn’t?? haha, the movement in the beginning is there because I didn’t bother trimming the movement of the camera :P




autistic complaining

honestly I don’t even know how to interpret ~11.5 μg b/s (microgram-bits per second).

Seriously, I get not liking capital letters, but ESPECIALLY in this case (as ~11.5 b/s and ~11.5 B/s are about equally plausible readings), capitalize your units! Also differentiate between GiB (gibibytes) and GB (gigabytes).

to be fair, because g and b are not separated by a space, “×” or “·”, g should be interpreted as a prefix according to SI rules, but it’s not something most people know about, and g is not a valid SI prefix anyway.



Huh, fair enough, it’s hard to know what’s obvious and what’s not.

As for the second part, to my slight surprise, it did manage to figure it out. I guess I just suck at using LLMs lol

edit: To clarify, previously I tried only to correct its mistakes over and over again until I got frustrated

edit2: I still feel like ChatGPT is struggling with the basics of the language. Maybe it’s just me being shit at using LLMs but smh


As I stated in a different comment in this thread, I worded my comment poorly. Why I think this is relevant, however, is that, at least in this case, if an LLM gets code which is significantly different from what it’s trained on, it can make wildly incorrect guesses. While here it’s because of a language with a… unique syntax, I think this could also be the case for code with a lot of technical debt or weird design decisions.


Sorry, I worded my comment poorly, see my reply to FaceDeer in this thread


Admittedly, I worded my comment poorly. What I meant is that ChatGPT struggled with understanding the semantics and structure of the language.

As an example, take this code block

$S__ do
S__-m__w("Hello world!") do

You can hopefully guess that S__ is a variable which has a method m__w, accessed using a hyphen rather than a dot, and that statements end with a do keyword. ChatGPT missed on all of these.


Something I found is that LLMs struggle with weirder cases when it comes to code.

I once tried getting ChatGPT (though admittedly only 3.5) to generate code in and understand SaHuTOrEPoL, which is one of the more esoteric languages I created, and it really struggled with it.


What in the name of Santa is that font