Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology, politics, and science fiction.

Spent many years on Reddit before joining the Threadiverse as well.

  • 0 Posts
  • 58 Comments
Joined 10M ago
Cake day: Mar 03, 2024


If this isn’t a military battle then that makes Israel’s actions look even worse.

They were triggered indiscriminately. Israel had no way of knowing who was holding each pager or where it was located when it went off.


It’s complicated, but this might be considered a war crime. A key quote from the article:

A booby trap is defined as “any device designed or adapted to kill or injure, and which functions unexpectedly when a person disturbs or approaches an apparently harmless object,” according to Article 7 of a 1996 adaptation of the Convention on Certain Conventional Weapons, which Israel has adopted. The protocol prohibits booby traps “or other devices in the form of apparently harmless portable objects which are specifically designed and constructed to contain explosive material.”

The prohibition is presumably intended to make it less likely that a civilian or other uninvolved person will be injured or killed by one of these seemingly harmless objects. If you’re booby-trapping military equipment or military facilities, that’s not a problem; civilians wouldn’t be using those.



That’s not what they’re arguing, not even close.


And unfortunately, this article is also just a response to media clickbait, not the discussion point it tries to look like

And becomes new clickbait in the process.


Replacing people with AI creates a situation where the incentive for people to make original works is greatly diminished,

Why would that be? It should be the opposite: making VO cheaper means studios can take risks and get experimental. Basically what cheap engines have done for indie development.


not some fucking investors and shareholders that probably kept pressuring CS for the last several years to reduce costs and increase revenue,

This is presumably part of what would be at issue in court. The shareholders are claiming they were lied to. We’ll see how that holds up.


CrowdStrike (CRWD.O) has been sued by shareholders who said the cybersecurity company defrauded them by concealing how its inadequate software testing could cause the July 19 global outage that crashed more than 8 million computers.

In a proposed class action filed on Tuesday night in the Austin, Texas federal court, shareholders said they learned that CrowdStrike’s assurances about its technology were materially false and misleading when a flawed software update disrupted airlines, banks, hospitals and emergency lines around the world.

Basically, the company advertised itself as being one way to the shareholders, they bought in on that basis, and then it turned out they were misrepresenting themselves. Presumably they’re suing the company and not the executives personally because that’s where the money is.

Note that simply owning the shares doesn’t mean that it’s already “their money.” If I buy a share in a company I can’t walk up to it and demand that they give me a portion of the cash from the register. It’s more complicated than that and lawsuits like this are part of that complexity.


That would depend entirely on why OpenAI might go under. The linked article is very sparse on details, but it says:

These expenses alone stack miles ahead of its rivals’ expenditure predictions for 2024.

Which suggests this is likely an OpenAI problem and not a problem with AI in general. If OpenAI goes under, the rest of the market may actually surge as they devour OpenAI’s abandoned market share.


AI engineers are not a unitary group with opinions all aligned. Some of them really like money too. Or just want to build something that changes the world.

I don’t know of a specific “when” where a bunch of engineers left OpenAI all at once. I’ve just seen a lot of articles over the past year with some variation of “<company> is a startup founded by former OpenAI engineers.” There might have been a surge when Altman was briefly ousted, but that was brief enough that I wouldn’t expect a visible spike on the graph.



Well, my point is that it’s already largely irrelevant what they do. Many of their talented engineers have moved on to other companies, some new startups and some already-established ones. The interesting new models and products are not being produced by OpenAI so much any more.

I wouldn’t be surprised if “safety alignment” is one of the reasons, too. There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that’s likely to lock away the things they build if they turn out to be too neat.


OpenAI is no longer the cutting edge of AI these days, IMO. It’ll be fine if they close down. They blazed the trail, set the AI revolution in motion, but now lots of other companies have picked it up and are doing better at it than them.




The IA is appealing the decision so they’re not out of the woods just yet.


Same here. I’m waiting to see that lawsuit reach its final conclusion, I don’t want to throw good money after bad.

Even afterward, I’m concerned that they might go do some stupid stunt like that again. I’ll want to see if there’s any fallout among their leadership over getting into this situation.


No idea, I’m just repeating caveats I’ve seen raised on this particular news before.


Important to note that the initial form of this treatment is to trigger the growth of teeth that failed to grow in the first place, at least last I read about it. An important first step, but for now it may be dependent on there being an existing “tooth bud” down in the jaw to get going.

I suspect that in the long run we’ll need to figure out how to implant a new tooth bud, probably made using the patient’s stem cells, to grow replacements for teeth that have been lost later in life.


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.

Yeah, that headline…


But only sometimes. Not often enough to stop me from finding it more useful than not.


This is why I like Bing Chat for this kind of thing; it does a web search in the background and will often be working right from the API documentation.


I’m a good programmer and I still find LLMs to be great for banging out Python scripts to handle one-off tasks. I usually use Copilot; it seems best for that sort of thing. Often the first version of the script will have a bug or a misunderstanding in it, but all you need to do is tell the LLM what it did wrong, or paste the text of the exception into the chat, and it’ll usually fix its own mistakes quite well.

I could write those scripts myself by hand if I wanted to, but they’d take a lot longer and I’d be spending my time on boring stuff. Why not let a machine do the boring stuff? That’s why we have technology.
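
To give a concrete sense of the kind of one-off task I mean, here’s roughly the sort of script I’d ask for. This is a made-up example of my own, and the folder name is just a placeholder:

```python
# Hypothetical one-off script: prefix every .jpg in a folder with its
# modification date. Trivial logic, but tedious to type out by hand.
import datetime
from pathlib import Path

folder = Path("photos")  # placeholder folder name (an assumption, adjust to taste)
if folder.is_dir():
    for path in sorted(folder.glob("*.jpg")):
        stamp = datetime.date.fromtimestamp(path.stat().st_mtime).isoformat()
        new_name = f"{stamp}_{path.name}"
        path.rename(path.with_name(new_name))
        print(f"{path.name} -> {new_name}")
```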


If you’re careless with your prompting, sure. The “default style” of ChatGPT is widely known at this point. If you want it to sound different you’ll need to provide some context to tell it what you want it to sound like.

Or just use one of the many other LLMs out there to mix things up a bit. When I’m brainstorming I usually use Chatbot Arena to bounce ideas around; it’s a page where you can send a prompt to two randomly selected LLMs, and by voting on which gave the better response you help rank them on a leaderboard. This way I get to run my prompts through a lot of variety.
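
For the curious, the leaderboard math behind that kind of pairwise voting is along the lines of an Elo rating system. A minimal sketch of my own (not Chatbot Arena’s actual code):

```python
# Minimal Elo-style update from one pairwise vote (illustrative only).
def elo_update(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    expected = 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0))  # P(winner wins)
    delta = k * (1.0 - expected)
    return winner + delta, loser - delta

ratings = {"model_a": 1000.0, "model_b": 1000.0}
# A user voted that model_a gave the better response:
ratings["model_a"], ratings["model_b"] = elo_update(ratings["model_a"], ratings["model_b"])
print(ratings)  # model_a gains exactly what model_b loses
```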


This thread isn’t about websites, it’s about functions built into operating systems. Those are generally much more configurable. Microsoft wants corporations to run Windows, after all, and corporations tend to be very touchy about this sort of thing.


Yeah, it’s not stopping me from commenting. I’m only noting the downvotes in this case because I was making a point elsewhere in the thread about the extremely anti-AI sentiment around here. In this case I’m not even saying something positive about it, merely speculating about the reason why Microsoft is doing this, and I guess that’s still being interpreted as “justifying” AI and therefore something worthy of attack.


Copilot has boosted my programming productivity significantly. Bing Chat has replaced Google when it comes to conceptual searches (i.e., when I want to learn something, not when I want to find some specific website). I’ve been using Bing Image Creator extensively for illustrations for a tabletop roleplaying campaign I’m running. I still mostly use Gimp and Stable Diffusion locally for editing those images, but I’ve checked out Paint because of the AI integration and was seriously considering using it. Paint of all things, a program that’s long been considered something of a joke.


I’m not overly concerned because I know how to use these things. I know what they do, and so when one of them is doing something concerning I turn it off.

People are frightened of things they don’t understand, and it’s apparent that lots of people don’t understand AI.


I was asked what the reason for this function was, so I speculated on that reason in an attempt to answer the question, and I got downvoted for it.

I wasn’t addressing the privacy concerns at all. That wasn’t part of the question.


That just so happens to describe me to a T. I’m a privacy-minded programmer who came here as part of the Reddit exodus. Because I’m a programmer and am aware of how these AIs function, I am not overly concerned about them and appreciate the capabilities they provide to me. I’m aware of the risks and how to manage them.

The comment I was responding to brought up “Linux is better” unprompted. But that’s in line with the echo chamber, so I guess that’s fine.


I don’t know what specifically Microsoft is planning here, but in the past I’ve taken screenshots of my settings window and uploaded them to Copilot to ask it for help sorting out a problem. It was very useful for Copilot to be able to “see” what my settings were. Since the article describes a series of screenshots being taken over time, it could perhaps be meant to provide context to an AI so that it knows what’s been going on.


Well, good news, then: that’s not what Microsoft is using AI for in this case.


No, I sound like someone who likes many of the new AI-powered features that Microsoft has been coming up with lately.

I don’t use Linux. I don’t think about it at all, it doesn’t affect me.


Check the upvote/downvote counts on my comment vs. macattack’s. It’s nigh impossible to say anything positive about AI around here.


Whereas I’m enjoying many of the new AI-powered features that Microsoft has been coming up with lately.

But echo chambers gonna echo, I guess.


It is impossible for them to contain more than just random fragments; the models are too small for the training data to be compressed enough to fit. Even the fragments that have been found are not exact; the AI is “lossy” and hallucinates.

The examples that have been found are examples of overfitting, a flaw in training where the same data gets fed into the training process hundreds or thousands of times over. This is something that modern AI training goes to great lengths to avoid.
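
As a rough illustration of one such measure (my own sketch, not any lab’s actual pipeline), exact-duplicate training examples can be dropped by hashing them; real pipelines also do fuzzier near-duplicate detection on top of this:

```python
# Drop exact-duplicate training examples by hashing their text (illustrative sketch).
import hashlib

def dedupe(examples):
    seen = set()
    unique = []
    for text in examples:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique

corpus = ["the quick brown fox", "the quick brown fox", "hello world"]
print(dedupe(corpus))  # ['the quick brown fox', 'hello world']
```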


You could say it’s to “circumvent” the law or you could say it’s to comply with the law. As long as the PII is gone what’s the problem?


The GDPR says that information that has been anonymized, for example through statistical analysis, is fine. LLM training is essentially a form of statistical analysis. There’s hardly anything in law that is “simple.”
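
To illustrate the distinction with a toy example of my own (not legal advice): aggregate statistics can be derived from personal records while the identifying fields are discarded entirely:

```python
# Toy example: the output keeps only aggregate counts; the names and emails
# in the raw records never appear in it.
from collections import Counter

records = [
    {"name": "Alice", "email": "alice@example.com", "country": "DE"},
    {"name": "Bob", "email": "bob@example.com", "country": "DE"},
    {"name": "Carol", "email": "carol@example.com", "country": "FR"},
]

per_country = Counter(r["country"] for r in records)
print(per_country)  # Counter({'DE': 2, 'FR': 1}) -- no PII survives
```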


Maybe it’s “simple as that” if you’re just expressing an opinion, but what’s the legal basis for it?


The analogy isn’t perfect, no analogy ever is.

In this case the content of the search is all that really matters for the quality of the search. What else would you suggest be recorded, the words-per-minute typing speed, the font size? If they want to improve the search system they need to know how it’s working, and that involves recording the searches.

It’s anonymized and you can opt out. Go ahead and opt out. There’ll still be enough telemetry for them to do their work.
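
As a sketch of what “anonymized, with an opt-out” can look like in code (my own illustration, not Microsoft’s implementation):

```python
# Illustrative telemetry gate: respect the opt-out flag, and record only the
# search text plus a coarse date, never a user identifier.
import datetime

def record_search(query: str, telemetry_opt_out: bool, sink: list) -> None:
    if telemetry_opt_out:
        return  # nothing leaves the machine
    sink.append({
        "query": query,
        "day": datetime.date.today().isoformat(),  # coarse, not an exact timestamp
    })

events: list = []
record_search("printer drivers", telemetry_opt_out=False, sink=events)
record_search("tax documents", telemetry_opt_out=True, sink=events)
print(events)  # only the first search was recorded
```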