AI must be developed responsibly to improve mental health outcomes

For years, artificial intelligence has been touted as a potential game-changer for health care in the United States. More than a decade after the HITECH Act incentivized hospital systems to adopt electronic health records (EHRs) for patient data management, there has been an explosion in the amount of health care data generated, stored, and available to generate insights and inform clinical decisions.

The motivation to integrate AI into mental health services has grown during the pandemic. The Kaiser Family Foundation reported an increase in adults experiencing symptoms of anxiety and depression, from 1 in 10 adults before the pandemic to 4 in 10 adults in early 2021. Combined with a national shortage of mental health professionals and limited opportunities for in-person mental health support, AI-powered tools could serve as an entry point to care by automatically and remotely measuring symptoms and intervening to reduce them.

Many new mental health companies are integrating AI into their product offerings. Woebot Health developed a chatbot that provides on-demand therapy to users through natural language processing (NLP). Spring Health leverages machine learning powered by historical patient data to drive personalized treatment recommendations. Big tech companies are also starting to dive into this space: Apple recently partnered with UCLA to develop algorithms that measure symptoms of depression using data collected on Apple devices.

However, we have also seen that AI is far from perfect. There have been notable bumps in the road in other areas of medicine that reveal the limitations of AI and, in particular, of the machine learning models that drive its decision-making. For example, Epic, one of the largest EHR software developers in the United States, deployed a sepsis prediction tool to hundreds of hospitals, and researchers found that the tool performed poorly in many of these hospital systems. A widely used algorithm for referring people to “high-risk care management” programs was less likely to refer Black patients than white patients who were equally sick. As AI products for mental health are released, technologists and clinicians must learn from the past failures of AI tools to create more effective interventions and limit potential harm.


Our recent research outlines three areas where AI-powered mental health technologies may underperform in practice.

  • Understanding people: First, it can be difficult for AI mental health measurement tools to contextualize the different ways people experience changes in mental health. For example, some people sleep more when experiencing a depressive episode, while others sleep less, and AI tools may not be able to understand these differences without additional human interpretation (see the sketch after this list).
  • Adapting over time: Second, AI technologies must adapt to patients' ongoing needs as they evolve. For example, during the COVID-19 pandemic, we were forced to adapt to new personal and professional norms. Similarly, AI-powered mental health measurement tools must adapt to new behavioral routines, and treatment tools must offer a new set of options to accommodate changing user priorities.
  • Consistent data collection: Third, AI tools may work differently across devices due to different data access policies created by device manufacturers. For example, many researchers and companies are developing AI mental health measures using data collected from technologies such as smartphones. Apple doesn’t allow developers to collect many types of data available on Android, and many studies have created and validated AI mental health measures exclusively with Android devices.
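To make the first point concrete, here is a minimal sketch in Python of how a measurement tool might compare a person's recent sleep to their own baseline rather than to a single population-wide cutoff. The data, function names, and thresholds are hypothetical and purely illustrative; they do not describe any particular product or study.

```python
# Minimal sketch (hypothetical data and thresholds): why population-level rules
# can misread individual symptom changes. Some people sleep more during a
# depressive episode, others sleep less, so we compare each person to their
# own baseline instead of applying a single cutoff to everyone.
from statistics import mean, stdev

def sleep_deviation(baseline_hours: list[float], recent_hours: list[float]) -> float:
    """Return how far recent sleep drifts from this person's own baseline, in SD units."""
    mu, sigma = mean(baseline_hours), stdev(baseline_hours)
    return (mean(recent_hours) - mu) / sigma if sigma > 0 else 0.0

# Person A sleeps more than usual; Person B sleeps less than usual.
person_a = sleep_deviation(baseline_hours=[7.0, 7.5, 7.2, 6.8], recent_hours=[9.5, 9.8, 10.1])
person_b = sleep_deviation(baseline_hours=[7.0, 7.5, 7.2, 6.8], recent_hours=[4.5, 5.0, 4.2])

# A naive population rule like "flag if sleep > 9 hours" would flag A but miss B,
# even though both show a large change from their own routine.
for name, dev in [("A", person_a), ("B", person_b)]:
    flagged = abs(dev) > 2.0  # flag large deviations in either direction (illustrative cutoff)
    print(f"Person {name}: deviation = {dev:+.1f} SDs, flagged = {flagged}")
```

Even this simple per-person framing leaves open questions a real system would face, such as how much baseline data is enough and when a clinician should interpret the change.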

With these focus areas in mind, we investigated whether a smartphone-based AI tool could measure mental health among people experiencing different mental health symptoms and using different devices. While the tool was fairly accurate, the different symptoms and types of data collected across devices limited what our tool could measure compared to tools tested in more homogeneous populations. As these systems are rolled out to larger and more diverse populations, it will become more difficult to meet the needs of different users.
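One way to see how heterogeneity limits a tool is to report performance separately for each subgroup rather than only in aggregate. The toy sketch below, with made-up labels and a simple accuracy metric, illustrates how an overall score can mask large gaps between device platforms or symptom groups; it is an assumption-laden illustration, not our actual evaluation.

```python
# Illustrative sketch (made-up records): aggregate accuracy can hide gaps between
# subgroups such as device platform or symptom profile. Reporting metrics per
# subgroup shows where a tool underperforms before it is rolled out more widely.
from collections import defaultdict

# (platform, symptom_group, true_label, predicted_label) -- hypothetical records
records = [
    ("android", "depression", 1, 1), ("android", "depression", 0, 0),
    ("android", "anxiety",    1, 1), ("android", "anxiety",    0, 0),
    ("ios",     "depression", 1, 0), ("ios",     "depression", 0, 0),
    ("ios",     "anxiety",    1, 0), ("ios",     "anxiety",    0, 1),
]

correct, total = defaultdict(int), defaultdict(int)
for platform, group, y_true, y_pred in records:
    # tally overall accuracy plus accuracy per platform and per symptom group
    for key in [("overall",), (platform,), (group,)]:
        total[key] += 1
        correct[key] += int(y_true == y_pred)

for key in sorted(total):
    print(f"{'/'.join(key):<12} accuracy = {correct[key] / total[key]:.2f}")
```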


Given these limitations, how do we responsibly develop AI tools that improve mental health care? As a general mindset, technologists should not assume that AI tools will work well once implemented, but instead should continually work with stakeholders to reassess solutions when they underperform or are not aligned with stakeholder needs.

For one, we should not assume that technological solutions are always welcome. History proves this: it is well established that the introduction of EHRs contributed to provider burnout, and these systems are notoriously difficult to use. Similarly, we need to understand how AI mental health technologies can affect different stakeholders within the mental health system.

For example, AI-powered therapy chatbots may be a suitable solution for patients experiencing mild mental health symptoms, but patients experiencing more severe symptoms will need additional support. How do we enable that handoff from a chatbot to a care provider? As another example, continuous measurement tools can provide a remote, less burdensome way to assess patients' mental health. But who should be allowed to see these measures, and when should they be available? Clinicians, already overwhelmed and experiencing data overload, may not have time to review this data outside of appointments. At the same time, patients may feel that the data collection and sharing violates their privacy.

Organizations implementing AI mental health technologies must understand these complexities to be successful. By working with stakeholders to identify the different ways that AI tools interact with and impact the people who provide and receive care, technologists are more likely to build solutions that improve patient mental health.


Dan Adler is a doctoral student at Cornell Tech, where he works in the People-Aware Computing Lab building technology to improve mental health and well-being.

