Researchers call ChatGPT maker OpenAI’s transcription tool used in hospitals ‘problematic’


New Delhi: Whisper, the AI-powered transcription tool from ChatGPT maker OpenAI, has come under scrutiny for its tendency to fabricate information despite being touted for its accuracy, according to a report. Experts have called the tool problematic because it is used across a large number of industries around the world to translate and transcribe interviews.

According to a report by the AP news agency, experts warn that these fabrications, a phenomenon known as “hallucinations”, can include false medical information, violent rhetoric and racial commentary, and pose serious risks, especially in sensitive areas such as health care.

Despite OpenAI’s warnings against using Whisper in high-risk environments, the tool has been widely adopted in various industries, including healthcare, where it is used to transcribe patient consultations.
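
For context, such transcription is typically run on an audio recording and returns plain text. The sketch below uses OpenAI’s open-source whisper Python package; the model size and the file name consultation.wav are illustrative assumptions, not details from the report.

    # Minimal sketch using OpenAI's open-source "whisper" package (pip install openai-whisper).
    # The model size and the file name "consultation.wav" are illustrative assumptions.
    import whisper

    model = whisper.load_model("base")                # load a pretrained Whisper model
    result = model.transcribe("consultation.wav")     # run speech-to-text on the recording

    # The output is plain text; nothing in it marks which passages, if any,
    # were hallucinated rather than actually spoken.
    print(result["text"])

Because the transcript carries no indication of fabricated passages, downstream users such as clinics may have no easy way to spot hallucinated content without re-checking the original audio.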

What researchers have to say

According to Alondra Nelson, who led the White House Office of Science and Technology Policy for the Biden administration until last year, such errors could have “really serious consequences,” particularly in hospital settings.

“Nobody wants a misdiagnosis. There should be a higher bar,” said Nelson, a professor at Princeton’s Institute for Advanced Study.

Whisper can make up things that haven’t been said.

Researchers have also found that Whisper can make up entire sentences or fragments of text, with studies showing a significant prevalence of hallucinations in both short and long audio samples.

A University of Michigan researcher who conducted a study on public meetings found hallucinations in eight out of 10 audio transcripts he inspected. These inaccuracies raise concerns about the reliability of Whisper transcripts and the possibility of misinterpretation or misrepresentation of information.


Experts and former OpenAI employees are demanding greater transparency and accountability from the company.

“This seems solvable if the company is willing to prioritize it. It’s problematic if you put this out there and people are too confident in what it can do and integrate it into all these other systems,” added William Saunders, a San Francisco-based research engineer who left OpenAI in February over concerns about the company’s direction.

OpenAI acknowledges the problem and says it is continually working to reduce hallucinations.

  • Posted on Oct 28, 2024 at 11:15 am IST
