• 2 Posts
  • 24 Comments
Joined 2Y ago
Cake day: Jun 13, 2023


God the narrative of Business Insider is gross.

The only thing making SO decline is that they have a CEO. And that CEO is trying to “compete”.

Just keep being a great platform for Q&A and stop chasing profits. People prefer SO because the answers are trustworthy. LLMs will always bullshit you and will never be better than a platform free of AI crap.


Which server do you use that caches messages for you, and how do I know if a given XMPP provider has implemented this functionality?

Also, does it also work with e2ee messages?


Hopefully you mean more residential areas in mixed-use developments. Zoning is a cancer that destroys the climate.



Asymmetric communication allows communication when one party is offline. One of the biggest issues with XMPP is both parties need to be online…
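As a toy sketch (not based on any real XMPP implementation; all names are invented), the store-and-forward pattern is what makes asymmetric delivery work: the server holds messages addressed to an offline recipient and flushes them when that user reconnects.

```python
from collections import defaultdict, deque

class StoreAndForwardServer:
    """Toy store-and-forward relay: the sender never needs the
    recipient to be online at the same time."""

    def __init__(self):
        self.online = set()
        self.mailbox = defaultdict(deque)   # queued messages per offline user
        self.delivered = defaultdict(list)  # what each user has received

    def send(self, sender, recipient, msg):
        if recipient in self.online:
            self.delivered[recipient].append((sender, msg))
        else:
            self.mailbox[recipient].append((sender, msg))  # held for later

    def connect(self, user):
        self.online.add(user)
        while self.mailbox[user]:           # flush everything queued offline
            self.delivered[user].append(self.mailbox[user].popleft())

srv = StoreAndForwardServer()
srv.send("alice", "bob", "hi while you're offline")
srv.connect("bob")
# bob receives the queued message on reconnect
```

In XMPP terms this is roughly what server-side offline storage and message archiving provide; the point of the comment stands in that the baseline protocol was designed around both parties being present.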



You think YouTube will stop Tor users? That would mean blocking hundreds of millions of users whose internet censors YouTube unless they use Tor Browser.

Don’t worry, big evil corporations wouldn’t shut out such huge market segments.


Yes. Alec Muffett was the guy who set up the most popular Tor darknet site: Facebook. I think he did Twitter’s too. He maintains an excellent list.

https://github.com/alecmuffett/real-world-onion-sites


The US government would never allow Tor to die. They need it to conduct terrorism and cyberwarfare.


Yeah, we need to make it illegal to block someone from doing a simple GET request just because they’re using a privacy tool.

It should only be legal to block access based on how you act, not based on how you look.


There are many use-cases for Tor. One is anonymity. One is to bypass censorship. The most popular website on the darknet is Facebook.

It doesn’t “defeat the purpose” of using Tor in Tibet to access a Facebook page.



I don’t think a single credible source has shown this to be a vulnerability. You’re talking about an attack that would cost, what, millions of dollars to run per day?



You’re looking at it from the wrong end.

In fact, the international intelligence community is helping me to launder my drug purchasing traffic.


No, that’s exactly how this stuff works. Lay off 80% of writers and keep all your fact checkers and editors.



Double ratchet e2ee and asymmetric communication


Scrolling for the Indians who get fiber for pennies.


Worst part about cable is the packet loss. It’s legit better to get a slow DSL connection in Germany than faster cable internet, because the packet loss makes RTC unusable.





Anyone know which companies are getting contracts to build, install, and own the wind infrastructure?


After being scammed into thinking her daughter was kidnapped, an Arizona woman testified in the US Senate about the dangerous side of artificial intelligence technology when it is in the hands of criminals.

Jennifer DeStefano told the Senate judiciary committee about the fear she felt when she received an ominous phone call on a Friday last April. Thinking the unknown number was a doctor’s office, she answered the phone just before 5pm on the final ring. On the other end of the line was her 15-year-old daughter, or at least what sounded exactly like her daughter’s voice. “On the other end was our daughter Briana sobbing and crying saying ‘Mom’.”

Briana was on a ski trip when the incident took place, so DeStefano assumed she had injured herself and was calling to let her know. DeStefano recreated the interaction for her audience: “‘Mom, I messed up,’ with more crying and sobbing. Not thinking twice, I asked her again, ‘OK, what happened?’” She continued: “Suddenly a man’s voice barked at her to ‘lay down and put your head back’.”

Panic immediately set in, and DeStefano demanded to know what was happening. “Nothing could have prepared me for her response,” DeStefano said. She heard her daughter say: “‘Mom, these bad men have me. Help me! Help me!’ She begged and pleaded as the phone was taken from her.”

“Listen here, I have your daughter. You tell anyone, you call the cops, I am going to pump her stomach so full of drugs,” a man on the line then said to DeStefano. The man told DeStefano he “would have his way” with her daughter and drop her off in Mexico, and that she’d never see her again.

At the time of the phone call, DeStefano was at her other daughter Aubrey’s dance rehearsal. She put the phone on mute and screamed for help, which caught the attention of nearby parents, who called 911 for her. DeStefano negotiated with the fake kidnappers until police arrived.
At first, they set the ransom at $1m, then lowered it to $50,000 when DeStefano told them such a high price was impossible. She asked for a routing number and wiring instructions, but the man refused that method because it could be “traced” and demanded cash instead. DeStefano said she was told she would be picked up in a white van with a bag over her head so that she wouldn’t know where she was going. She said he told her: “If I didn’t have all the money, then we were both going to be dead.”

But another parent with her informed her that police were aware of AI scams like these. DeStefano then made contact with her actual daughter and husband, who confirmed repeatedly that they were fine. “At that point, I hung up and collapsed to the floor in tears of relief,” DeStefano said. When she tried to file a police report after the ordeal, she was dismissed and told it was a “prank call”.

A survey by McAfee, a computer security software company, found that 70% of people said they weren’t confident they could tell the difference between a cloned voice and the real thing. McAfee also said it takes only three seconds of audio to replicate a person’s voice.

DeStefano urged lawmakers to act in order to prevent scams like these from hurting other people. She said: “If left uncontrolled, unregulated, and we are left unprotected without consequence, it will rewrite our understanding and perception of what is and what is not truth. It will erode our sense of ‘familiar’ as it corrodes our confidence in what is real and what is not.”

Algorithm Used in Jordanian World Bank Aid Program Stiffs the Poorest
The algorithm used for the cash relief program is broken, a Human Rights Watch report found.

A program spearheaded by the World Bank that uses algorithmic decision-making to means-test poverty relief money is failing the very people it’s intended to protect, according to a new report by Human Rights Watch. The anti-poverty program in question, known as the Unified Cash Transfer Program, was put in place by the Jordanian government.

Having software systems make important choices is often billed as a means of making those choices more rational, fair, and effective. In the case of the poverty relief program, however, the Human Rights Watch investigation found the algorithm relies on stereotypes and faulty assumptions about poverty. “The problem is not merely that the algorithm relies on inaccurate and unreliable data about people’s finances,” the report found. “Its formula also flattens the economic complexity of people’s lives into a crude ranking that pits one household against another, fueling social tension and perceptions of unfairness.”

The program, known in Jordan as Takaful, is meant to solve a real problem: the World Bank provided the Jordanian state with a multibillion-dollar poverty relief loan, but it’s impossible for the loan to cover all of Jordan’s needs. Without enough cash to cut every needy Jordanian a check, Takaful works by analyzing the household income and expenses of every applicant, along with nearly 60 socioeconomic factors like electricity use, car ownership, business licenses, employment history, illness, and gender. These responses are then ranked, using a secret algorithm, to automatically determine who is poorest and most deserving of relief.
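The actual Takaful formula is secret, but the general shape described here, score each household on weighted indicators and sort, can be sketched in a few lines. This is purely illustrative: every indicator name and weight below is invented, and the real system uses nearly 60 factors rather than four.

```python
# Hypothetical weights: negative values count AGAINST the applicant
# (car ownership, utility use, income), positive values count toward need.
WEIGHTS = {
    "owns_car": -0.3,
    "monthly_electricity_kwh": -0.001,
    "monthly_income": -0.002,
    "household_size": 0.1,
}

def poverty_score(household: dict) -> float:
    """Higher score = ranked as needier by this toy means test."""
    return sum(WEIGHTS[k] * household.get(k, 0) for k in WEIGHTS)

def rank_households(households: list[dict]) -> list[dict]:
    # Neediest-first ranking; aid goes to the top of the list.
    return sorted(households, key=poverty_score, reverse=True)

families = [
    {"id": "A", "owns_car": 1, "monthly_electricity_kwh": 400,
     "monthly_income": 300, "household_size": 6},
    {"id": "B", "owns_car": 0, "monthly_electricity_kwh": 150,
     "monthly_income": 250, "household_size": 3},
]
ranking = rank_households(families)
# Family A is larger and poorer in practice, but car ownership and high
# electricity use (perhaps from a badly insulated flat) drag its score
# below family B, the kind of distortion the HRW report describes.
```

The sketch makes the report’s complaint concrete: once a fixed weight is attached to an indicator like car ownership or electricity use, the ranking cannot distinguish a luxury from a poverty-driven expense.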
The idea is that such a sorting algorithm would direct cash to the most vulnerable Jordanians, those in most dire need of it. According to Human Rights Watch, the algorithm is broken.

The rights group’s investigation found that car ownership seems to be a disqualifying factor for many Takaful applicants, even if they are too poor to buy gas to drive the car. Similarly, applicants are penalized for using electricity and water based on the presumption that their ability to afford utility payments is evidence that they are not as destitute as those who can’t. The Human Rights Watch report, however, explains that sometimes electricity usage is high precisely for poverty-related reasons: “For example, a 2020 study of housing sustainability in Amman found that almost 75 percent of low-to-middle income households surveyed lived in apartments with poor thermal insulation, making them more expensive to heat.” In other cases, one Jordanian household may be using more electricity than its neighbors because it is stuck with old, energy-inefficient home appliances.

Beyond the technical problems with Takaful itself are the knock-on effects of digital means-testing. The report notes that many people in dire need of relief money lack the internet access to even apply for it, requiring them to find, or pay for, a ride to an internet café, where they are subject to further fees and charges to get online. “Who needs money?” asked one 29-year-old Jordanian Takaful recipient who spoke to Human Rights Watch.
“The people who really don’t know how [to apply] or don’t have internet or computer access.”

Human Rights Watch also faulted Takaful’s insistence that applicants’ self-reported income match up exactly with their self-reported household expenses, which “fails to recognize how people struggle to make ends meet, or their reliance on credit, support from family, and other ad hoc measures to bridge the gap.” The report found that the rigidity of this step forced people to simply fudge the numbers so that their applications would even be processed, undermining the algorithm’s illusion of objectivity. “Forcing people to mold their hardships to fit the algorithm’s calculus of need,” the report said, “undermines Takaful’s targeting accuracy, and claims by the government and the World Bank that this is the most effective way to maximize limited resources.”

The report, based on 70 interviews with Takaful applicants, Jordanian government workers, and World Bank personnel, emphasizes that the system is part of a broader trend by the World Bank to popularize algorithmically means-tested social benefits over universal programs throughout the developing economies of the so-called Global South.

Compounding the dysfunction of an algorithmic program like Takaful is the increasingly common, naïve assumption that automated decision-making software is so sophisticated that its results are less likely to be faulty. Just as dazzled ChatGPT users often accept nonsense outputs from the chatbot because the concept of a convincing chatbot is so inherently impressive, artificial intelligence ethicists warn that the veneer of automated intelligence surrounding automated welfare distribution leads to a similar myopia.
The Jordanian government’s official statement to Human Rights Watch defending Takaful’s underlying technology provides a perfect example: “The methodology categorizes poor households to 10 layers, starting from the poorest to the least poor, then each layer includes 100 sub-layers, using statistical analysis. Thus, resulting in 1,000 readings that differentiate amongst households’ unique welfare status and needs.”

When Human Rights Watch asked the Distributed AI Research Institute to review these remarks, Alex Hanna, the group’s director of research, concluded: “These are technical words that don’t make any sense together.” DAIR senior researcher Nyalleng Moorosi added: “I think they are using this language as technical obfuscation.”

As is the case with virtually all automated decision-making systems, while the people who designed Takaful insist on its fairness and functionality, they refuse to let anyone look under the hood. Though it’s known that Takaful uses 57 different criteria to rank poverty, the report notes that the Jordanian National Aid Fund, which administers the system, “declined to disclose the full list of indicators and the specific weights assigned, saying that these were for internal purposes only and ‘constantly changing.’”

While fantastical visions of “Terminator”-like artificial intelligence have come to dominate public fears around automated decision-making, other technologists argue civil society ought to focus on the real, current harms caused by systems like Takaful, not nightmare scenarios drawn from science fiction. So long as the workings of Takaful and its ilk remain government and corporate secrets, the extent of those risks will remain unknown.