Image credit: Photographer is my life. / Getty Images
A group of eminent AI ethicists has written a counterargument to this week’s controversial letter calling for a six-month “pause” in AI development, criticizing its focus on hypothetical future threats while real harm is already being caused by misuse of the technology today.
Thousands of people, including familiar names such as Steve Wozniak and Elon Musk, signed the open letter from the Future of Life Institute earlier this week, urging that development of AI models like GPT-4 be put on hold in order to avoid, among other threats, a “loss of control of our civilization.”
Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell are all leading figures in the field of AI and ethics, known (in addition to their work) for being pushed out of Google over a paper criticizing the capabilities of AI. They now work together at the DAIR Institute, a new research outfit aimed at studying, exposing, and preventing AI-associated harms.
They were not, however, on the list of signatories, and they have now published a rebuke accusing the letter of failing to engage with the existing problems already caused by this technology.
“These hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,” they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures, and the further concentration of those power structures in fewer hands.
Choosing to worry about a Terminator- or Matrix-esque robot apocalypse is a red herring when, at this very moment, we have reports of companies like Clearview AI being used by police to essentially frame innocent men. You don’t need a T-1000 when there’s a Ring cam on every front door, accessible via an online rubber-stamp factory.
While the DAIR crew agrees with some of the letter’s aims, such as identifying synthetic media, they emphasize that we must act now, on today’s problems, with the remedies already available to us:
What we need is regulation that enforces transparency. Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose their training data and model architectures. The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, which means that the builders of these systems should be made accountable for the outputs produced by their products.
The current race toward ever larger “AI experiments” is not a preordained path where our only choice is how fast to run, but rather a series of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation that protects the rights and interests of people.
Now is indeed the time to act, but the focus of our concern should not be imaginary “powerful digital minds.” Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, which are rapidly centralizing power and increasing social inequities.
Incidentally, this letter echoes a sentiment I heard from Uncharted Power founder Jessica Matthews at the AfroTech event in Seattle yesterday: we should not fear AI itself, but the people who build it. (Her solution: become the people who build it.)
It is highly unlikely that any major company will agree to pause its research efforts in response to an open letter, but judging from the engagement the letter received, the risks of AI, real and hypothetical alike, are clearly a major concern across many segments of society. Still, if the companies won’t act, perhaps someone will have to do it for them.