AI experts disown Musk-backed campaign citing their research – ET HealthWorld


London: Four artificial intelligence experts have raised concerns after their work was cited in an open letter – co-signed by Elon Musk – demanding an urgent pause in AI research.

The letter, dated March 22 and with more than 1,800 signatures as of Friday, called for a six-month pause in developing systems “more powerful” than the new Microsoft-backed OpenAI GPT-4, which can hold a human-like conversation, compose songs and summarize long documents.

Since GPT-4’s predecessor, ChatGPT, was launched last year, rival companies have rushed to release similar products.

The open letter says that AI systems with “human-competitive intelligence” pose profound risks to humanity, citing 12 pieces of research by experts, including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind.

Since then, civil society groups in the US and the EU have lobbied lawmakers to rein in OpenAI’s research. OpenAI did not immediately respond to requests for comment.

Critics have accused the Future of Life Institute (FLI), the organization behind the letter which is primarily funded by the Musk Foundation, of prioritizing imaginary doomsday scenarios over more immediate concerns about AI, such as racist or sexist bias.

Among the research cited was “On the Dangers of Stochastic Parrots,” a paper co-authored by Margaret Mitchell, who previously oversaw AI ethics research at Google. Mitchell, now chief ethics scientist at artificial intelligence firm Hugging Face, criticized the letter, telling Reuters it was unclear what counted as “more powerful than GPT-4.”

“By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”

Mitchell and her co-authors, Timnit Gebru, Emily M. Bender, and Angelina McMillan-Major, subsequently published a response to the letter, accusing its authors of fearmongering and hype about AI.


“It is dangerous to be distracted by an AI-enabled fantasized utopia or apocalypse that promises a ‘flourishing’ or ‘potentially catastrophic’ future,” they wrote.

“The responsibility does not lie with the artifacts but with their builders.”

FLI president Max Tegmark told Reuters the campaign was not an attempt to hamper OpenAI’s corporate advantage.

“It’s quite funny. I’ve seen people say, ‘Elon Musk is trying to slow down the competition,'” he said, adding that Musk was not involved in writing the letter. “This is not about a company.”

IMMEDIATE RISKS

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, told Reuters she agreed with some points in the letter, but took issue with the way her work was cited.

Last year, she co-authored a research paper arguing that the widespread use of AI already posed serious risks. Her research argued that the current use of AI systems could influence decision-making in relation to climate change, nuclear war and other existential threats.

She said: “AI doesn’t need to reach human-level intelligence to exacerbate those risks. There are non-existential risks that are very, very important, but they don’t get the same kind of Hollywood-level attention.”

Asked to comment on the criticism, FLI’s Tegmark said that both the short- and long-term risks of AI need to be taken seriously. “If we cite someone, it just means we’re saying they endorse that sentence. It doesn’t mean they endorse the letter, or that we endorse everything they think,” he told the news agency.

Dan Hendrycks, director of the California-based Center for AI Safety, whose work was also cited in the letter, defended its content, telling Reuters that it was sensible to consider black swan events – those that seem unlikely but would have devastating consequences.


The open letter also warned that generative AI tools could be used to flood the internet with “propaganda and untruth.” Dori-Hacohen said it was ironic that Musk had signed it, citing a reported rise in misinformation on Twitter following his acquisition of the platform, documented by the civil society group Common Cause and others. Musk and Twitter did not immediately respond to requests for comment.


