Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit before joining the Threadiverse as well.
It’s complicated, but this might be considered a war crime. A key quote from the article:
A booby trap is defined as “any device designed or adapted to kill or injure, and which functions unexpectedly when a person disturbs or approaches an apparently harmless object,” according to Article 7 of a 1996 adaptation of the Convention on Certain Conventional Weapons, which Israel has adopted. The protocol prohibits booby traps “or other devices in the form of apparently harmless portable objects which are specifically designed and constructed to contain explosive material.”
The prohibition is presumably intended to make it less likely that a civilian or other uninvolved person will get injured or killed by one of these seemingly harmless objects. If you’re booby-trapping military equipment or military facilities then that’s not a problem; civilians wouldn’t be using those.
CrowdStrike (CRWD.O) has been sued by shareholders who said the cybersecurity company defrauded them by concealing how its inadequate software testing could cause the July 19 global outage that crashed more than 8 million computers.
In a proposed class action filed on Tuesday night in the Austin, Texas federal court, shareholders said they learned that CrowdStrike’s assurances about its technology were materially false and misleading when a flawed software update disrupted airlines, banks, hospitals and emergency lines around the world.
Basically, the company presented itself to shareholders one way, they bought in on that basis, and then it turned out the company had been misrepresenting itself. Presumably they’re suing the company and not the executives personally because that’s where the money is.
Note that simply owning the shares doesn’t mean that it’s already “their money.” If I buy a share in a company I can’t walk up and demand a portion of the cash from the register. It’s more complicated than that, and lawsuits like this are part of that complexity.
That would depend entirely on why OpenAI might go under. The linked article is very sparse on details, but it says:
These expenses alone stack miles ahead of its rivals’ expenditure predictions for 2024.
Which suggests this is likely an OpenAI problem and not an AI in general problem. If OpenAI goes under the rest of the market may actually surge as they devour OpenAI’s abandoned market share.
AI engineers are not a unitary group with opinions all aligned. Some of them really like money too. Or just want to build something that changes the world.
I don’t know of a specific “when” where a bunch of engineers left OpenAI all at once. I’ve just seen a lot of articles over the past year with some variation of “<company> is a startup founded by former OpenAI engineers.” There might have been a surge when Altman was briefly ousted, but that was brief enough that I wouldn’t expect a visible spike on the graph.
Well, my point is that it’s already largely irrelevant what they do. Many of their talented engineers have moved on to other companies, some new startups and some already-established ones. The interesting new models and products are not being produced by OpenAI so much any more.
I wouldn’t be surprised if “safety alignment” is one of the reasons, too. There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that’s likely to lock away the things they build if they turn out to be too neat.
Important to note that the initial form of this treatment is to trigger the growth of teeth that failed to grow in the first place, at least last I read about it. An important first step, but for now it may be dependent on there being an existing “tooth bud” down in the jaw to get going.
I suspect that in the long run we’ll need to figure out how to implant a new tooth bud, probably made using the patient’s stem cells, to grow replacements for teeth that have been lost later in life.
This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.
Yeah, that headline…
I’m a good programmer and I still find LLMs to be great for banging out Python scripts to handle one-off tasks. I usually use Copilot; it seems best for that sort of thing. Often the first version of the script will have a bug or misunderstanding in it, but all you need to do is tell the LLM what it did wrong or paste the text of the exception into the chat, and it’ll usually fix its own mistakes quite well.
I could write those scripts myself by hand if I wanted to, but they’d take a lot longer and I’d be spending my time on boring stuff. Why not let a machine do the boring stuff? That’s why we have technology.
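To give a sense of what I mean by a one-off task, here’s a hypothetical example of the sort of throwaway script an LLM can usually produce on the first or second try (this is my own illustration, not an actual LLM output):

```python
# Hypothetical one-off task: normalize file extensions to lowercase
# in a directory (e.g. IMG_01.JPG -> IMG_01.jpg). The kind of boring
# chore I'd rather hand to a machine than write by hand.
from pathlib import Path

def normalize_extensions(directory):
    """Rename files so their extensions are lowercase; return the renames made."""
    renamed = []
    for path in sorted(Path(directory).iterdir()):
        if path.is_file() and path.suffix != path.suffix.lower():
            target = path.with_suffix(path.suffix.lower())
            path.rename(target)
            renamed.append((path.name, target.name))
    return renamed
```

Nothing clever, but writing and debugging even a script like this by hand takes longer than describing it in a sentence.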
If you’re careless with your prompting, sure. The “default style” of ChatGPT is widely known at this point. If you want it to sound different you’ll need to provide some context to tell it what you want it to sound like.
Or just use one of the many other LLMs out there to mix things up a bit. When I’m brainstorming I usually use Chatbot Arena to bounce ideas around, it’s a page where you can send a prompt to two randomly-selected LLMs and then by voting on which gave a better response you help rank them on a leaderboard. This way I get to run my prompts through a lot of variety.
Yeah, it’s not stopping me from commenting. I’m only noting the downvotes in this case because I was making a point elsewhere in the thread about the extremely anti-AI sentiment around here. In this case I’m not even saying something positive about it, merely speculating about the reason why Microsoft is doing this, and I guess that’s still being interpreted as “justifying” AI and therefore something worthy of attack.
Copilot has boosted my programming productivity significantly. Bing Chat has replaced Google when it comes to conceptual searches (i.e., when I want to learn something, not when I want to find some specific website). I’ve been using Bing Image Creator extensively for illustrations for a tabletop roleplaying campaign I’m running. I still mostly use Gimp and Stable Diffusion locally for editing those images, but I’ve checked out Paint because of the AI integration and was seriously considering using it. Paint, of all things, a program that’s long been considered something of a joke.
That just so happens to describe me to a T. I’m a privacy-minded programmer who came here as part of the Reddit exodus. Because I’m a programmer and am aware of how these AIs function, I am not overly concerned about them and appreciate the capabilities they provide to me. I’m aware of the risks and how to manage them.
The comment I was responding to brought up “Linux is better” unprompted. But that’s in line with the echo, so I guess that’s fine.
I don’t know what specifically Microsoft is planning here, but in the past I’ve taken screenshots of my settings window and uploaded them to Copilot to ask it for help sorting out a problem. It was very useful for Copilot to be able to “see” what my settings were. Since the article describes a series of screenshots being taken over time, it could perhaps be meant to provide context to an AI so that it knows what’s been going on.
It is impossible for them to contain more than just random fragments; the models are too small for the training data to be compressed enough to fit. Even the fragments that have been found are not exact, since the AI is “lossy” and hallucinates.
The examples that have been found are examples of overfitting, a flaw in training where the same data gets fed into the training process hundreds or thousands of times over. This is something that modern AI training goes to great lengths to avoid.
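One of the basic safeguards is deduplicating the training corpus so no document is seen over and over. A minimal sketch of the idea (real pipelines use far more sophisticated near-duplicate detection; this only shows exact-match removal):

```python
# Sketch of exact-duplicate removal from a training corpus.
# Hashing each document and keeping only the first occurrence
# prevents the same text from being fed into training repeatedly.
import hashlib

def dedupe(documents):
    """Keep the first occurrence of each document, dropping exact repeats."""
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique
```

In practice labs also hunt for near-duplicates (the same article with minor edits), which is where memorized fragments tend to slip through.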
The analogy isn’t perfect, no analogy ever is.
In this case the content of the search is all that really matters for the quality of the search. What else would you suggest be recorded, the words-per-minute typing speed, the font size? If they want to improve the search system they need to know how it’s working, and that involves recording the searches.
It’s anonymized and you can opt out. Go ahead and opt out. There’ll still be enough telemetry for them to do their work.
If this isn’t a military battle then that makes Israel’s actions look even worse.
They were triggered indiscriminately. Israel had no way of knowing who was holding each pager or where it was located when it went off.