
Because we were looking for more things to do when these clowns decided to write "the letter," and cite our #StochasticParrots paper while saying the opposite of what we write, we (@emilymbender, Angelina McMillan-Major, and @mmitchell_ai) wrote a statement in response.
https://www.dair-institute.org/blog/letter-statement-March2023
Tl;dr: The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.
On Tuesday, March 28, the Future of Life Institute published a letter calling for a minimum six-month moratorium on "training AI systems more powerful than GPT-4," signed by more than 2,000 people, including Turing award winner Yoshua Bengio and one of the world’s richest men, Elon Musk.
While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as "Stochastic Parrots"), such as "provenance and watermarking systems to help distinguish real from synthetic" media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined "powerful digital minds" with "human-competitive intelligence."
Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today. The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem,
and 3) the concentration of power in the hands of a few people, which exacerbates social inequities.

While we are not surprised to see this type of letter from a longtermist organization like the Future of Life Institute, which is generally aligned with a vision of the future in which we become radically enhanced posthumans, colonize space, and create trillions of digital people, we are dismayed to see the number of computing professionals who have signed this letter,
and the positive media coverage it has received. It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse that promises either a "flourishing" or "potentially catastrophic" future.
Such language that inflates the capabilities of automated systems and anthropomorphizes them, as we note in Stochastic Parrots, deceives people into thinking that there is a sentient being behind the synthetic media. This not only lures people into uncritically trusting the outputs of systems like ChatGPT, but also misattributes agency. Accountability properly lies not with the artifacts but with their builders.
