Understanding the regulations and risks of AI


By Balakrishna DR

Artificial intelligence (AI) has gained tremendous visibility in recent years for its ability to improve how companies do what they do, analyse the results, and rewrite traditional business rules. Summing up its potential, Gartner says CEOs and CIOs believe a fifth of their total revenue could come from “machine customers” by 2030.

However, alongside its benefits, AI carries certain inherent risks. One of the most significant is social manipulation through algorithms, where bad actors spread misinformation to serve their own agendas, with consequences ranging from political upheaval to weapons automation. For example, the journal Nature Machine Intelligence published a study in June 2021 which found that large language models, which are increasingly used in AI applications, display undesirable stereotypes.

AI can also undermine privacy and security when personal data is collected to monitor people's activities. The way that data is gathered and turned into intelligence can cause breaches that lead to lawsuits. More importantly, data manipulation can lead to bias.

Algorithmic risks can arise at three phases: the data collection stage, where the chances of anomalies are high; the design stage, which can be flawed by faulty assumptions and coding errors; and the output stage, where the results may be interpreted incorrectly.
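
To illustrate the first of these phases, the short Python sketch below checks newly collected records for missing fields, out-of-range values and uneven group representation before they are used to train a model. The field names, valid ranges and sample records are purely illustrative assumptions, not taken from any specific system.

    # Hypothetical sketch of a pre-training data check for the collection
    # stage: missing fields, out-of-range values and uneven group
    # representation. Field names, ranges and records are assumptions
    # for illustration only.

    def audit_records(records, required_fields, valid_ranges, group_field):
        issues, group_counts = [], {}
        for i, record in enumerate(records):
            for field in required_fields:
                if record.get(field) is None:
                    issues.append(f"record {i}: missing '{field}'")
            for field, (low, high) in valid_ranges.items():
                value = record.get(field)
                if value is not None and not low <= value <= high:
                    issues.append(f"record {i}: '{field}'={value} outside [{low}, {high}]")
            group = record.get(group_field, "unknown")
            group_counts[group] = group_counts.get(group, 0) + 1
        return issues, group_counts

    if __name__ == "__main__":
        data = [
            {"age": 34, "income": 52000, "region": "north"},
            {"age": None, "income": 48000, "region": "north"},
            {"age": 29, "income": -100, "region": "south"},
        ]
        problems, representation = audit_records(
            data,
            required_fields=["age", "income"],
            valid_ranges={"age": (18, 100), "income": (0, 10_000_000)},
            group_field="region",
        )
        print(problems)        # anomalies caught before any model is trained
        print(representation)  # how evenly each group is represented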

Global AI regulatory standards
Around the world, regulatory authorities are introducing strict measures to minimize these risks.


Several US states require an annual “bias audit” of automated employment decision tools (AEDTs) used in hiring, with the results made available for public scrutiny. Bodies such as the Equal Employment Opportunity Commission (EEOC) also remind companies to be fair in their AI-powered hiring decisions.
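
To make the idea of a bias audit concrete, the sketch below computes a simple impact ratio, each group's selection rate divided by the highest group's rate, which is one metric commonly reported in such audits. The group labels, sample outcomes and the 0.8 (“four-fifths”) threshold are illustrative assumptions rather than requirements of any particular state law.

    # Minimal sketch of an impact-ratio calculation, one metric commonly
    # reported in bias audits of automated hiring tools. Group labels,
    # outcomes and the 0.8 threshold are illustrative assumptions only.

    def selection_rates(decisions):
        """decisions: dict mapping group name -> list of 0/1 hiring outcomes."""
        return {group: sum(outcomes) / len(outcomes)
                for group, outcomes in decisions.items() if outcomes}

    def impact_ratios(decisions):
        """Ratio of each group's selection rate to the highest group's rate."""
        rates = selection_rates(decisions)
        best = max(rates.values())
        return {group: rate / best for group, rate in rates.items()}

    if __name__ == "__main__":
        sample = {
            "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5 of 8 selected
            "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # 3 of 8 selected
        }
        for group, ratio in impact_ratios(sample).items():
            flag = "review" if ratio < 0.8 else "ok"
            print(f"{group}: impact ratio {ratio:.2f} ({flag})")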

The Global Partnership on Artificial Intelligence (GPAI) is another initiative established to ensure that human rights are built into AI programs. Canada has proposed three laws to safeguard individual privacy: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA).

The Council of Europe (CoE) has issued guidelines on smart security. A benchmark for AI-related data protection is the European Union’s General Data Protection Regulation (GDPR), which regulates how companies may collect, process and store personal data. Since AI models can rely on personal data, companies planning to use AI to fuel their business growth need to ensure they are GDPR-compliant, which safeguards their AI adoption but also makes it more complex.

China recently released ethical guidelines stipulating that AI should not endanger public safety.

Can regulations hinder the use of AI?
While the regulations above were introduced to protect personal data, ensure global security, and prevent bias, in some cases rules like the GDPR can get in the way of AI. For example, when data is non-compliant, companies are forced to remove it, which directly affects the quantity and quality of the data available to build intelligence.

But in general, regulations like the GDPR provide a roadmap and framework for how personal data should be collected and processed. The underlying ethics of data collection will ensure that there are no breaches or negative repercussions from data-related intelligence.


Despite regulations, AI is here to stay
In short, AI is a research-driven fundamental science, comparable to nanotechnology, quantum physics and other pioneering fields that are essential for progress. Curbing this science with heavy-handed regulation would slow its progress and, ultimately, its practical usefulness. Ensuring compliance while maximizing the benefits of AI will require companies to formalize policies committed to ethical standards.

If companies can use AI to its full potential while following guidelines on privacy, security, bias and data protection, AI will be a game changer, even more so for young companies on creative journeys.

Finally, the GDPR describes the right to obtain an explanation of decisions made by algorithms, and explainability is key to the freedom of AI.

Balakrishna DR (Bali), Executive Vice President – Global Head, AI & Automation & ECS, Infosys

(DISCLAIMER: The views expressed are solely those of the author and are not necessarily endorsed by ETHealthworld. ETHealthworld.com shall not be liable for any damage caused to any person or organization directly or indirectly.)

Posted on Apr 13, 2023 at 05:20 AM IST
