












cross-posted from: https://hexbear.net/post/7563423 > RentAHuman is a new digital marketplace connecting AI agents to humans who don't mind taking orders from the computer.


In 1984, Jet Set Willy was released. A great game that every kid at school wanted. Of course we all wanted a copy, but it cost £8 here in the UK, which was several weeks' pocket money. Copying games then involved finding a kid whose Dad was seriously into hi-fi and had a stackable stereo system, then we'd copy it with their tape-to-tape deck. But JSW had this as the cassette inlay.

How did this work? When the game loaded, after about 10-15 minutes, it would ask what colours were in grid square A5, or H9, etc. Get it wrong twice and the game would exit and you'd need to start over. (If you're wondering what happens if you're colour blind: you could write to the publishers, and if they accepted your complaint, they would ask you to send them the game and would give you a cheque to cover the refund.)

Of course, kids are determined and inventive, and this was well before any of us had access to photocopiers or digital cameras, so we would spend our lunchtimes with pencil and paper writing down every single combination...

It was a good game, with some great music, but really, really hard.

(Credit to https://intarch.ac.uk/journal/issue45/2/1.html for the picture; the page also goes into more depth.)
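The scheme described above boils down to a lookup table printed on the inlay card. Below is a minimal sketch of that kind of check; the grid references and colour combinations are invented for illustration and are not taken from the actual card.

```python
# Sketch of an inlay-card colour check like the one described above.
# CODE_CARD contents are made up; the real card listed many more cells.
import random
import sys

CODE_CARD = {
    "A5": {"red", "yellow", "cyan", "green"},
    "H9": {"blue", "magenta", "white", "red"},
}

def protection_check(max_attempts: int = 2) -> bool:
    """Ask for the colours in one grid square; two wrong answers fail."""
    cell, expected = random.choice(list(CODE_CARD.items()))
    for _ in range(max_attempts):
        answer = input(f"Colours in grid square {cell}? ")
        guess = {c.strip().lower() for c in answer.split(",")}
        if guess == expected:
            return True
        print("Incorrect.")
    return False

if __name__ == "__main__":
    if not protection_check():
        sys.exit("Wrong twice - rewind the tape and start over.")
    print("Loading Jet Set Willy...")
```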

- Reid Hoffman (2,658 Files)
- Bill Gates (2,592 Files)
- Peter Thiel (2,281 Files)
- Elon Musk (1,116 Files)
- Larry Page (314 Files)
- Sergey Brin (294 Files)
- Mark Zuckerberg (282 Files)
- Jeff Bezos (196 Files)
- Eric Schmidt (193 Files)

Lemmings, your thoughts on Fastmail vs Proton Mail?
Looks like FM is proprietary (?), is based in a 5-eyes country, and has its servers in the US, but, uh, apart from that...?


Defeating a 40-year-old copy protection dongle – Dmitry Brant
cross-posted from: https://feditown.com/post/2498813






Al Jazeera has condemned YouTube’s compliance with an Israeli law banning the network’s livestreams in the country, warning that the move signals how major tech companies can be “co-opted as instruments of regimes hostile to freedom”. YouTube’s submission to Israel’s ban became apparent on Wednesday, days after Israeli Communications Minister Shlomo Karahi ordered a 90-day extension of an existing ban on the network’s operations in Israel, blocking broadcasting and internet companies from carrying the network’s content.



okay, first off: hey, what the fuck? mozilla has A THOUSAND FOUR HUNDRED MILLION dollars? what were all them cookie sales for thunderbird? oh yeah, they have this shell game of interlinked but not-really-separate entities so you can play whack-a-mole till the death of the universe. and they wanna burn it dicking around with AI? fuck each and every ghoul steering that ship into the abyss, and they can take Gnome's fucking shaman or whatwasit with them. is everybody insane? "silicon valley" was supposed to be a satire, these cretins make mike judge seem like a prophet...


Any ideas where I can get a dedicated book scanner?
(For starters, please don't respond "Just go to Anna's Archive" or something like that; they have a lot of stuff, but they don't have everything, and sometimes the quality of their scans is less than stellar.)

I'd like to be able to *easily* scan some of my books and get decent-looking PDFs as a result. I have a flatbed scanner, but in my experience you have to really, really, *really* mash the book flat against the platen and practically break the spine to get a good-quality image. Even then it often takes several tries to get it right. I can do this, but it's very labor-intensive, very time-consuming, and probably not great for the book being scanned.

I would love to have a *dedicated* book scanner, one specially designed for books. I've seen some for the academic market that have /\ shaped platens and automatically turn the pages and so forth, but they often run to the tens of thousands of dollars. I've also seen scanners that look like an oversized itty-bitty book light and are placed above the spread of an open book; I don't have any experience with these, and I'm not sure how good the end result is.

So, if anyone can recommend a relatively easy and quick way to scan my books and get a decent output, I'd be happy to hear from you.

P.S. I didn't mention EPUBs anywhere because I can't stand them (if they work for you, great, but not for me!)



**PE-AV - Audiovisual Perception with Code**
* Meta's perception encoder for audio-visual understanding with open code release.
* Processes both visual and audio information to isolate sound sources.
* [Paper](https://go.meta.me/e541b6) | [Code](https://go.meta.me/7fbef0)

**T5Gemma 2 - Open Encoder-Decoder**
* Next-generation encoder-decoder model with fully open-source weights.
* Combines bidirectional understanding with flexible text generation.
* [Blog](https://blog.google/technology/developers/t5gemma-2/) | [Model](https://huggingface.co/google/t5gemma-2-270m-270m)

**Qwen-Image-Layered - Open Image Decomposition**
* Decomposes images into editable RGBA layers with full model release.
* Each layer can be independently manipulated for precise editing.
* [Hugging Face](https://huggingface.co/QwenLM/Qwen-Image-Layered) | [Paper](https://arxiv.org/abs/2512.16776) | [Demo](https://huggingface.co/spaces/Qwen/Qwen-Image-Layered-Demo)

**N3D-VLM - Open 3D Vision-Language Model**
* Native 3D spatial reasoning with open weights and code.
* Understands depth and spatial relationships without 2D distortions.
* [GitHub](https://github.com/W-Ted/N3D-VLM) | [Model](https://huggingface.co/yuxinhk/N3D-VLM)

**Generative Refocusing - Open Depth Control**
* Controls depth of field in images with full code release.
* Simulates camera focus changes through 3D scene inference.
* [Website](https://generative-refocusing.github.io/) | [Demo](https://huggingface.co/spaces/nycu-cplab/Genfocus-Demo) | [Paper](https://arxiv.org/abs/2512.16923) | [GitHub](https://github.com/rayray9999/Genfocus)

**StereoPilot - Open 2D to 3D Conversion**
* Converts 2D videos to stereo 3D with open model and code.
* Full source release for VR content creation.
* [Website](https://hit-perfect.github.io/StereoPilot/) | [Model](https://huggingface.co/KlingTeam/StereoPilot) | [GitHub](https://github.com/KlingTeam/StereoPilot) | [Paper](https://arxiv.org/abs/2512.16915)

**Chatterbox Turbo - MIT Licensed TTS**
* State-of-the-art text-to-speech under the permissive MIT license.
* No commercial restrictions or cloud dependencies.
* [Hugging Face](https://huggingface.co/p1/Chatterbox-Turbo)

**FunctionGemma - Open Function Calling**
* Lightweight 270M-parameter model for function calling with full weights.
* Creates specialized function-calling models without commercial restrictions.
* [Model](https://huggingface.co/google/functiongemma-270m-it) (a minimal loading sketch follows after this list)

**FoundationMotion - Open Motion Analysis**
* Labels spatial movement in videos with full code and dataset release.
* Automatic motion pattern identification without manual annotation.
* [Paper](http://arxiv.org/abs/2512.10927) | [GitHub](https://github.com/Wolfv0/FoundationMotion/tree/main) | [Demo](https://huggingface.co/spaces/yulu2/FoundationMotion) | [Dataset](https://huggingface.co/datasets/WoWolf/v2-dev/tree/main)

**DeContext - Open Image Protection**
* Protects images from unwanted AI edits with open-source implementation.
* Adds imperceptible perturbations that block manipulation while preserving quality.
* [Website](https://linghuiishen.github.io/decontext_project_page/) | [Paper](https://arxiv.org/abs/2512.16625) | [GitHub](https://github.com/LinghuiiShen/DeContext)

**EgoX - Open Perspective Transformation**
* Transforms third-person videos to first-person with full code release.
* Maintains spatial coherence during viewpoint conversion.
* [Website](https://keh0t0.github.io/EgoX/) | [Paper](https://arxiv.org/abs/2512.08269) | [GitHub](https://github.com/DAVIAN-Robotics/EgoX)

**Step-GUI - Open GUI Automation**
* SOTA GUI automation with self-evolving pipeline and open weights.
* Full code and model release for interface control.
* [Paper](https://huggingface.co/papers/2512.15431) | [GitHub](https://github.com/stepfun-ai/gelab-zero) | [Model](https://huggingface.co/stepfun-ai/GELab-Zero-4B-preview)

**IC-Effect - Open Video Effects**
* Applies video effects through in-context learning with code release.
* Learns effect patterns from examples without fine-tuning.
* [Website](https://cuc-mipg.github.io/IC-Effect/) | [GitHub](https://github.com/CUC-MIPG/IC-Effect) | [Paper](https://arxiv.org/abs/2512.15635)
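As a quick illustration of how one of these releases might be picked up, here is a minimal sketch of loading the FunctionGemma checkpoint listed above with the Hugging Face `transformers` library. It assumes the model is a standard causal LM that `AutoModelForCausalLM` can load, that the checkpoint is not gated, and that no special chat or tool-calling template is required; check the model card before relying on it.

```python
# Minimal sketch: loading the FunctionGemma checkpoint from the roundup above.
# Assumptions: standard causal LM usable via transformers, checkpoint not gated.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/functiongemma-270m-it"  # model ID from the roundup

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A toy prompt; a real function-calling setup would include tool/schema
# definitions in whatever format the model card specifies.
prompt = "What is the weather in Berlin right now?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```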



> Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest. > **I am no longer needed for the actual technical work of my job.** I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.



It's interesting to note that, despite the common perception, data centers don't use very much.
