How Big Tech Manipulates Academia to Avoid Regulation
theintercept.com
“AI ethics” is a field that barely existed before 2017. It’s become a Silicon Valley-led lobby to avoid legal restrictions of controversial technologies.
AutoTL;DR (bot account):

This is the best summary I could come up with:


Since 2017, Ito financed many projects through the $27 million Ethics and Governance of AI Fund, an initiative anchored by the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University.

Inspired by whistleblower Signe Swenson and others who have spoken out, I have decided to report what I came to learn regarding Ito’s role in shaping the field of AI ethics, since this is a matter of public concern.

At the Media Lab, I learned that the discourse of “ethical AI,” championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies.

Although the Silicon Valley lobbying effort has consolidated academic interest in “ethical AI” and “fair algorithms” since 2016, a handful of papers on these topics had appeared in earlier years, even if framed differently.

I wrote, “If tens of millions of dollars from nonprofit foundations and individual donors are not enough to allow us to take a bold position and join the right side, I don’t know what would be.” (Omidyar funds The Intercept.)

For example, the board notes that although “the term ‘fairness’ is often cited in the AI community,” the recommendations avoid this term because of “the DoD mantra that fights should not be fair, as DoD aims to create the conditions to maintain an unfair advantage over any potential adversaries.” Thus, “some applications will be permissibly and justifiably biased,” specifically “to target certain adversarial combatants more successfully.” The Pentagon’s conception of AI ethics forecloses many important possibilities for moral deliberation, such as the prohibition of drones for targeted killing.


The original article contains 3,335 words, the summary contains 270 words. Saved 92%. I’m a bot and I’m open source!

Match!!:

It should be readily apparent that no AI used to kill can ever be ethical.

@[email protected]:

Are you suggesting it’s never ethical to kill? Nothing is black and white, especially when it comes to ethics.

But if it kills everyone, it can be fair.

Match!!:

this is a great illustration of the difference between fair and ethical

Equality through annihilation.

@[email protected]:

But how will we automate our trolley problems?
