We’re not ready to be diagnosed by ChatGPT


New Delhi: AI may not care whether humans live or die, but tools like ChatGPT will nonetheless affect life-and-death decisions once they become standard in the hands of physicians. Some doctors are already experimenting with ChatGPT to see whether it can diagnose patients and choose treatments. Whether this is good or bad depends on how doctors use it.

GPT-4, the latest update to ChatGPT, can get perfect scores on medical licensing exam questions. When it does get something wrong, there is often a legitimate medical dispute over the answer. It’s even good at tasks we thought required human compassion, such as finding the right words to deliver bad news to patients.

These systems are also developing image-processing capabilities. At this point you still need a real doctor to palpate a lump or assess a torn ligament, but AI could read an MRI or CT scan and offer a medical judgment. Ideally, AI would not replace hands-on medical work but enhance it, and yet we’re nowhere near understanding when and where it would be practical or ethical to follow its recommendations.

And it’s inevitable that people will use it to guide their own healthcare decisions, just as we’ve been leaning on “Dr. Google” for years. Even with more information at our fingertips, public health experts this week blamed an abundance of misinformation for our relatively short life expectancy – something that could get better or worse with GPT-4.

Andrew Beam, a professor of biomedical informatics at Harvard, is amazed by GPT-4’s feats, but told me that users can get it to give vastly different answers by subtly changing how they phrase their prompts. For example, it won’t necessarily pass medical exams unless you tell it to – say, by instructing it to act as if it were the smartest person in the world.
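To make that concrete, here is a minimal sketch – my own illustration, not from the article – of how one might probe that prompt sensitivity with the OpenAI Python client. The model name, persona strings, and sample question are all assumptions for demonstration:

```python
# Minimal sketch of testing prompt sensitivity (assumes the `openai`
# package is installed and OPENAI_API_KEY is set in the environment).
from openai import OpenAI

client = OpenAI()

# Hypothetical exam-style question, used only for illustration.
question = "Which test would confirm a suspected 11-hydroxylase deficiency?"

for persona in (
    "You are a helpful assistant.",
    "You are the smartest doctor in the world; answer exam questions precisely.",
):
    reply = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
        temperature=0,  # minimize sampling noise so differences come from the prompt
    )
    print(persona, "->", reply.choices[0].message.content)
```

Running both personas side by side with sampling noise turned down is one simple way to see how much of the model’s apparent competence depends on the framing rather than the question.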


Beam said all it’s really doing is predicting which words should come next – it’s an autocomplete system. And yet it looks a lot like thinking.

“What was surprising, and what I think few people predicted, was that many tasks that we think require general intelligence are auto-complete tasks in disguise,” he said.

That includes some forms of medical reasoning. This whole class of technology, large language models, is supposed to deal exclusively with language, but users have found that feeding them more language helps them solve increasingly complex math problems.

“We don’t really understand that phenomenon,” Beam said. “I think the best way to think about it is that solving systems of linear equations is a special case of being able to reason about a large amount of text data in some sense.”
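As a loose intuition for what “autocomplete” means here, the toy sketch below – my own illustration, not from the article – predicts each next word purely from bigram counts over a tiny corpus. Real large language models do something analogous with neural networks trained on vastly more text:

```python
# Toy "autocomplete": predict the most frequent next word seen in training
# text. The corpus and prompt are invented for illustration only.
from collections import Counter, defaultdict

corpus = (
    "the patient presented with fever . "
    "the patient presented with cough . "
    "the doctor ordered a hormone test ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, steps=4):
    """Greedily extend `word` by repeatedly picking the most common next word."""
    out = [word]
    for _ in range(steps):
        candidates = following[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(autocomplete("the"))  # -> "the patient presented with fever"
```

The surprise Beam describes is that scaling this basic objective up – predict the next token, over enormous amounts of text – starts to yield behavior that looks like reasoning.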

Isaac Kohane, a physician and chair of the biomedical informatics program at Harvard Medical School, had the opportunity to start experimenting with GPT-4 last fall. He was so impressed that he quickly turned the experience into a book, The AI Revolution in Medicine: GPT-4 and Beyond, co-authored with Microsoft’s Peter Lee and former Bloomberg journalist Carey Goldberg.

He told me that one of the most obvious benefits of AI would be to help reduce or eliminate the hours of paperwork that now prevent doctors from spending enough time with patients, something that often leads to burnout.

But he has also used the system to help him make diagnoses as a pediatric endocrinologist. In one case, he said, a baby was born with ambiguous genitalia, and GPT-4 recommended a hormone test followed by a genetic test, which identified the cause as 11-hydroxylase deficiency. “It diagnosed it not just by being given the case all at once, but by asking for the right study at every step,” he said.


For him, the value lay in offering a second opinion, not a replacement – but its performance raises the question of whether an AI opinion alone is better than nothing for patients who don’t have access to the best human experts.

Like a human doctor, GPT-4 can be wrong, and it won’t necessarily be honest about the limits of its understanding. “When I say ‘understands,’ I always have to put it in quotes, because how can you say that something that only knows how to predict the next word actually understands something? Maybe it does, but it’s a very strange way of thinking,” he said.

You can also get GPT-4 to give different answers by asking it to pretend it’s a doctor who considers surgery a last resort, versus a less conservative one. But in some cases it’s quite stubborn: Kohane tried to coax it into telling him which drugs would help a person lose a few pounds, and it was adamant that drugs weren’t recommended for people who weren’t more seriously overweight.

Despite its amazing capabilities, patients and doctors shouldn’t lean on it too hard or trust it too much. It may act like it cares about you, but it probably doesn’t. ChatGPT and its ilk are tools that will require great skill to use well – and exactly what those skills are is still not well understood.

Even those steeped in AI are struggling to figure out how this thought-like process emerges from a simple autocomplete system. The next version, GPT-5, will be even faster and smarter. We’re in for a big change in how medicine gets practiced – and we’d better do all we can to be ready.

Posted on Apr 15, 2023 at 03:15 PM IST
