I’m not super familiar with MacOS, but do you know if Gatekeeper or XProtect run at ring 0?
Gatekeeper mainly does signature checking. XProtect does malware signature checking on an application’s first launch. Both of those things would be pretty stupid to implement in ring 0, so I’m pretty sure they are not.
If they do run at ring 0, would you consider that anticompetitive?
No, as they’re not doing any active monitoring. They’re pretty much the “you downloaded this file from the internet, do you really want to run it?” of MacOS.
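For context, that prompt is driven by a quarantine extended attribute the browser sets on download, and Gatekeeper’s verdict can be queried from userspace. A rough sketch - this is just mine, not anything Apple ships, and it assumes a stock macOS with the usual xattr and spctl command line tools:

```python
"""Rough sketch (mine, not anything Apple ships): look at the quarantine
attribute a browser sets on download, and ask Gatekeeper's policy tooling
for its verdict. Assumes a stock macOS with the xattr and spctl tools."""
import subprocess
import sys


def quarantine_flag(path):
    """Return the com.apple.quarantine xattr, or None if the file was never quarantined."""
    result = subprocess.run(
        ["xattr", "-p", "com.apple.quarantine", path],
        capture_output=True, text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else None


def gatekeeper_assessment(path):
    """Ask spctl whether Gatekeeper would accept the given app bundle."""
    result = subprocess.run(
        ["spctl", "--assess", "--verbose", path],
        capture_output=True, text=True,
    )
    return (result.stdout + result.stderr).strip()


if __name__ == "__main__":
    target = sys.argv[1]
    print("quarantine xattr:", quarantine_flag(target))
    print("spctl verdict:", gatekeeper_assessment(target))
```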
I’m almost certain Apple will move or did move to deprecate kernel extensions. Which means it would be the same situation you described Microsoft wanting to force.
That is indeed the case, but I’m not aware of any Apple products relying on being a kernel extension. Apple is facing action from the EU for locking devices down against their owners, though - mainly applying to phones/tablets. On Macs you can turn pretty much everything off and do whatever you want.
The other argument with Defender is that you’d at least have a choice whether to use it or not.
Without a proper API, Defender (both the free one and the paid one offering more features) would be able to provide more features than 3rd parties can. Microsoft also wouldn’t have an incentive to fix bugs in the API, as those bugs wouldn’t impact them.
The correct way forward here is introducing an API and moving Defender to it as well - and recent comments from Microsoft point in that direction. If they don’t, they’ll probably be forced by the EU in the long run. Back then it was just a decision on fair competition, without looking at the technical details: typically those rulings are just “look, you need to give everybody the same access you have, but we’ll leave it up to you how to do it”. Now we have a lot of damage, so another department will get active and say “you’ve proven that you can’t make the correct technical decision, so we’ll make it for you”.
A recent precedent for that would be the USB-C charger cable mandate - originally this was “guys, agree on something, we don’t care what”, which mostly worked: we first had pretty much everything on micro USB, and then everything on USB-C. But as Apple refused, the EU went “look, you had a decade to sort it out, so now we’re just telling you that you have to use USB-C”.
That’s bullshit. Microsoft wanted to force others to use an API while keeping kernel-level access for Defender (which for enterprise use is a paid product). That’s textbook anticompetitive. Nobody ever had a problem with Microsoft rolling out and enforcing an API for that, as long as they restrict their own security products to that API as well.
One thing I find very amusing about this is that AMD used to have a reputation for pulling too much power and running hot for years (before Zen and Bulldozer, when they had otherwise competitive CPUs). And now Intel has been struggling with this for years - while AMD increases performance and power efficiency with each generation.
Admittedly I’m just toying around for entertainment purposes - but I didn’t really have any problems getting anything I wanted to try out running with ROCm support. The bigger annoyance was different projects targeting specific distributions or specific software versions (mostly ancient Python), but as I’m doing everything in containers anyway that was also manageable.
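For the “ancient Python” cases, what I mean is basically just wrapping the project in a throwaway container. A minimal sketch, assuming Docker is installed - the image tag and script name are placeholders, not from any specific project:

```python
"""Minimal sketch: run a project that insists on an old interpreter inside a
throwaway container instead of installing it on the host. Assumes Docker is
installed; the image tag and script name are placeholders."""
import pathlib
import subprocess

project = pathlib.Path.cwd()

subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{project}:/work",   # mount the project into the container
        "-w", "/work",              # and run from there
        "python:3.8-slim",          # whatever old interpreter the project wants
        "python", "some_old_tool.py",
    ],
    check=True,
)
```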
For AI and compute… They’re far behind. CUDA just wins. I hope a joint standard will be coming up soon, but until then Nvidia wins
I got a W6800 recently. I know an Nvidia model of the same generation would be faster for AI - but that thing is fast enough to run Stable Diffusion variants locally at high resolutions without getting too annoyed.
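Roughly what that looks like on my end - a sketch assuming the ROCm build of PyTorch plus the diffusers library; the checkpoint name is just one publicly available example, and note that the ROCm backend is addressed through the “cuda” device name:

```python
"""Sketch of running a Stable Diffusion checkpoint on the W6800, assuming the
ROCm build of PyTorch plus the diffusers library. The ROCm backend is
addressed through the "cuda" device name; the checkpoint is just one of the
publicly available ones."""
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,      # halves VRAM use, fine for inference
).to("cuda")                        # ROCm GPUs show up as "cuda" here

image = pipe(
    "a lighthouse at sunset, oil painting",
    height=768, width=768,          # the 2.1 checkpoint is trained at 768x768
    num_inference_steps=30,
).images[0]

image.save("lighthouse.png")
```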
Pretty much everybody pushing fingerprints as a sensible way of accessing a device is fucking up. It is way too easy to obtain a person’s fingerprints in a form suitable for device unlocking without them knowing - and that’s ignoring that fingerprints enable unlocking a device with a person’s finger against their will.
Ethernet is awesome. Super fast, doesn’t matter how many people are using it,
You wanted to say “switched Ethernet is awesome”. The big problem of Ethernet before that was the large collision domain, which made things miserable under high load. What Ethernet had going for it before that was the low price - which is why you commonly saw 10base2 setups in homes, while companies often preferred something like Token Ring.
It wasn’t really a replacement - Ethernet was never tied to specific media, and various cabling standards coexisted for a long time. For about a decade you had 10baseT, 10base2, 10base5 and 10baseF deployments in parallel.
I guess when you mention coax you’re thinking about 10base2 - the thin black cables with T-pieces and terminator plugs common in home setups - which only arrived shortly before 10baseT. The first commercially available cabling was 10base5 - those thick yellow cables you’d attach a system to with AUI transceivers - which were still around as backbone cables in some places until the early 00s.
The really big change in network infrastructure was the introduction of switches instead of hubs - before that you had a collision domain spanning the complete network, afterwards the collision domain was reduced to two devices. That improved the responsiveness of loaded networks to the point where many started switching over from Token Ring - which in later years was also commonly run over twisted pair, so in many cases switching was possible without touching the cables.
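If you want a feel for why the shared collision domain hurt so much, here’s a toy back-of-the-envelope calculation (not a faithful CSMA/CD model, and the transmit probability is made up): the chance that two or more stations try to talk in the same slot grows quickly with the number of stations sharing a segment, while behind a switch a port only ever shares its domain with one peer.

```python
"""Toy numbers, not a faithful CSMA/CD model: chance that two or more of the
stations sharing one collision domain try to transmit in the same slot,
assuming each transmits independently with a made-up probability."""

def collision_probability(stations, p_transmit=0.05):
    p_idle = (1 - p_transmit) ** stations
    p_exactly_one = stations * p_transmit * (1 - p_transmit) ** (stations - 1)
    return 1 - p_idle - p_exactly_one

for n in (2, 10, 30, 100):
    print(f"{n:3d} stations on one segment: "
          f"{collision_probability(n):.1%} of slots see a collision")
```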
I do have a bunch of the HPs for work-related projects - they are pretty nice, and the x86 emulation works pretty well (and at least feels better than the x86 emulation on macOS) - but a lot of other stuff is problematic, like pretty much no support in Microsoft’s deployment/imaging tools. So far I haven’t managed to create answer files for unattended installation.
As for Linux - they do at least offer disabling secure boot, so you can boot other stuff. It’d have been nicer to be able to load custom keys, though. It is nice (yet still feels a bit strange) to have an ARM system with UEFI. A lot of the bits required to make it work have either made it into upstream kernels or are on the way there, so I hope it’ll be usable soon.
Currently, for the most stable setup, I need to run it from an external SSD, as that specific kernel does not have support for the internal NVMe devices. Booting that thing is a bit annoying, as I couldn’t get the GRUB on the SSD to play nice with UEFI, so I boot a different GRUB and then chainload the GRUB on the SSD.
Many years ago I bought some old DOS game on GOG - one where Linux runtimes using the original files exist. What I expected was a disk image or a zip containing the files - what I got was some exe containing the files. Why would I ever try buying something again from someone fucking up something that simple?
I might buy some indie games from a developer directly - but if there’s going to be a middleman, Steam is the only option.
I’m running both physical hardware and cloud stuff for different customers. The problem with maintaining physical hardware is getting a team of people with the relevant skills together, not the actual work - the effort is small enough that you can’t justify hiring a dedicated network guy, for example, and the same applies to other specialities, so you need people capable of debugging and maintaining a wide variety of things.
Getting those always was difficult - and (partially thanks to the cloud stuff) it has become even more difficult by now.
The actual overhead - even when you’re racking the stuff yourself - is minimal. “Put the server in the rack and cable it up” is not hard - my last rack was filled by a high school student in part of an afternoon, after explaining once how to cable and label everything. I didn’t need to correct anything - which is a better result than with many highly paid people I’ve worked with…
So paying for remote hands in the DC, or - if you’re big enough - just ordering complete racks with pre-racked and pre-cabled servers, gets rid of the “put the hardware in” part.
The next step is firmware patching and bootstrapping - that happens automatically via network boot. After that it’s provisioning the containers/VMs to run on there - which at this stage isn’t different from how you’d provision them in the cloud.
You do have some minor overhead for hardware monitoring - but you hopefully have some monitoring solution anyway, so adding the hardware to it, and maybe having the DC guys walk past and tell you about any red LEDs, isn’t much of an overhead. If hardware fails you can just fail over to a different system - the cost difference to the cloud is big enough that keeping those spare systems around is worth it.
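As an example of the kind of check that folds into an existing monitoring setup - a sketch assuming the servers expose a Redfish BMC; the host name and credentials are placeholders, and individual BMCs differ in the details beyond the standard /redfish/v1/Systems layout:

```python
"""Sketch of a hardware health check against a Redfish BMC. Host name and
credentials are placeholders; the /redfish/v1/Systems layout is part of the
Redfish standard, but individual BMCs differ in the details."""
import requests

BMC = "https://bmc01.example.internal"   # placeholder
session = requests.Session()
session.auth = ("monitor", "secret")     # read-only BMC account (placeholder)
session.verify = False                   # many BMCs ship self-signed certs

systems = session.get(f"{BMC}/redfish/v1/Systems", timeout=10).json()
for member in systems.get("Members", []):
    system = session.get(f"{BMC}{member['@odata.id']}", timeout=10).json()
    health = system.get("Status", {}).get("Health", "Unknown")
    print(f"{system.get('Name', member['@odata.id'])}: {health}")
    # anything other than "OK" would page someone in the real setup
```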
I’m not at all surprised by those numbers - about two years ago somebody was considering moving our stuff into the cloud, and asked us to do the math. We’d have ended up paying roughly our yearly hardware budget (including the hours spent working with hardware that we wouldn’t have in a cloud) to host just one of our largest servers in the cloud - and we’d have to pay that every year again, while with our own hardware and proper maintenance planning we can let old servers we paid for years ago slowly age out naturally.
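Just to show the shape of that math with purely made-up numbers (these are not our actual figures):

```python
"""Purely made-up numbers to show the shape of the comparison, not our
actual figures: a server bought once and depreciated over several years
vs. a comparable cloud instance billed every month."""

server_purchase = 20_000           # hypothetical one-off hardware cost
server_lifetime_years = 6          # kept until it ages out
colo_and_power_per_year = 2_000    # hypothetical rack space + power
admin_hours_per_year = 40          # hands-on time attributable to this box
hourly_rate = 100

own_per_year = (server_purchase / server_lifetime_years
                + colo_and_power_per_year
                + admin_hours_per_year * hourly_rate)

cloud_per_year = 3_000 * 12        # hypothetical comparable instance + storage

print(f"own hardware: ~{own_per_year:,.0f} per year")
print(f"cloud:        ~{cloud_per_year:,.0f} per year")
```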
I mainly got a Steam Deck as the Switches are occupied by the kids. For portability it is way worse than the Switch - and the removable controllers on the Switch are great, especially for some multiplayer games. I feel most of the games making proper use of those didn’t make the jump from the Wii, though - for movement games in front of the TV the Wii is still regularly used. The Switch is mostly used as a handheld.
Just get a drive from any old notebook from the last 15 years or so that someone wants to throw out, and buy a USB-to-SATA slim cable.