Analysis

AI: Experts Call for a Pause!

 

Today, AI has become the buzzword in corporate corridors. Top executives are in a great hurry to integrate AI into their businesses to ensure continuity, fearing that otherwise they may as well become extinct. AI has indeed become the norm in the business world.

Artificial Intelligence is nothing but the ability of a machine to perform cognitive functions that we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, solving problems, and even exercising creativity.

It is the convergence of algorithmic advances, data proliferation, and tremendous increases in computing power and storage that has propelled AI from concept to reality in the recent past. Today, most machine learning algorithms detect patterns and learn how to make predictions from data instead of relying on explicit programming instructions. The algorithms also improve their efficacy by adapting to new data and experience over time. In short, machine learning is the process of automatically discovering patterns in data; once a pattern is discovered, it is used to make predictions. It is this process that is ultimately taking us towards AI.
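To make that fit-then-predict loop concrete, here is a minimal sketch in Python using the open-source scikit-learn library; the scenario and numbers are synthetic and purely illustrative, not drawn from any real business.

# A minimal sketch of the "discover a pattern, then predict" loop described
# above, using the open-source scikit-learn library. The data is synthetic
# and purely illustrative.
from sklearn.linear_model import LogisticRegression

# Past examples: weekly hours of product usage -> did the customer renew?
X_train = [[1], [2], [3], [10], [12], [15]]   # feature: usage hours per week
y_train = [0, 0, 0, 1, 1, 1]                  # label: 1 = renewed, 0 = churned

model = LogisticRegression()
model.fit(X_train, y_train)        # learn the pattern from historical data

print(model.predict([[4], [11]]))  # apply the pattern to new, unseen cases

No rule about usage hours was ever written by hand; the model inferred it from the labelled examples, which is the essential difference from explicit programming.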

In 2013, researchers from Oxford published an analysis of AI and employment, stating that jobs such as telemarketing, bookkeeping, and computer support, being repetitive and unimaginative in nature, were the most likely to be taken over by robots, bots, and AI in the following years.

But come 2022, it became evident that AI products could do precisely what the Oxford researchers had considered nearly impossible: mimic creativity. AI has grown adept at jobs once considered the purview of humans. JP Morgan states that its commercial loan agreements are now reviewed by AI in seconds, where similar reviews previously consumed 360,000 hours of lawyers’ time over the course of a year. Microsoft was reported to have laid off dozens of journalists at MSN, replacing them with AI that scans and processes content.

Large language models such as GPT-4 (GPT stands for generative pre-trained transformer) now answer questions and write articles much as humans do, with astonishing flair and precision. GPT-4 is reported to have on the order of a trillion parameters; by virtue of such scale, it has learnt the complex patterns of natural language far better than ChatGPT, and it can handle both text- and image-based queries. It thus comes closer to artificial general intelligence (AGI), that is, machine intelligence on a par with human intelligence.
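For a sense of how a business might plug such a model into a workflow, here is a sketch of a GPT-4 query using OpenAI's Python client as it stood when this was written (the 0.27.x releases); the prompt is a placeholder, a valid key in the OPENAI_API_KEY environment variable is assumed, and the client interface may change.

# A sketch of querying GPT-4 through OpenAI's Python client, as the API
# stood in early 2023 (openai 0.27.x). The prompt is a placeholder, and a
# valid API key is assumed to be present in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "user",
         "content": "Summarise the key obligations in this loan agreement: ..."},
    ],
)
print(response["choices"][0]["message"]["content"])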

With the emergence of various cognitive technologies covering facial recognition, emotion recognition, speech recognition, natural language processing, and the like, there has been a paradigm shift in the way businesses are run today. For instance, ELSA, an AI bot developed at MIT that can act as a psychotherapy counsellor, is perhaps all set to replace cognitive behavioral therapists!

Along with these gains, AI has also posed many ethical risks to businesses. For instance, an AI system that many health systems used to spot high-risk patients in need of follow-up care flagged Black patients as only 18% of those needing extra care, when the true share among the sickest patients turned out to be 46%. The root of such problems is historical bias in the data used to train the machine; and since AI is built to operate at scale, the impact of such biases can be enormous.
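The mechanism is easy to reproduce on toy data. The sketch below is entirely synthetic (it is not the actual health-system model): it trains on a "high cost" proxy label under the assumption that one group's care has historically cost less at the same level of sickness, and the learned model then under-flags equally sick patients from that group.

# Toy illustration (entirely synthetic) of the proxy-label problem: training
# on "high cost" instead of "sick" teaches the model to under-flag the group
# whose care has historically cost less at the same level of sickness.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

# Synthetic patients: a sickness score from 0 to 10 and a group, A or B.
patients = [(random.uniform(0, 10), random.choice("AB")) for _ in range(2000)]

def high_cost(sickness, group):
    # Historical bias baked into the label: group B's care has cost less
    # for the same sickness, so the cost proxy undercounts group B.
    spend = sickness * (1.0 if group == "A" else 0.6)
    return 1 if spend > 5 else 0

X = [[sickness, 1 if group == "A" else 0] for sickness, group in patients]
y = [high_cost(sickness, group) for sickness, group in patients]

model = LogisticRegression().fit(X, y)

# Two equally sick patients get very different "needs follow-up" scores.
print(model.predict_proba([[6, 1]])[0][1])  # patient in group A
print(model.predict_proba([[6, 0]])[0][1])  # equally sick patient in group B

The model has faithfully learned the pattern in its training data; the trouble is that the pattern itself encodes the historical inequity.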

Against this backdrop, on March 29, some 1,300 AI experts signed an open letter to AI labs stating that “AI labs are locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control”, and called for a moratorium on developing AI systems more powerful than the recently launched Large Language Model (LLM), GPT-4.

The letter, which came from the Future of Life Institute (FLI), a non-profit organization that works for the responsible use of AI, further clarified that it “does not mean a pause on AI development in general,” but rather “a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities”. It also stressed that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium”.

The letter also highlights the apprehension among AI experts about the unintended consequences of AI. One, since the newly launched AI systems are becoming human-competitive, should we let machines flood our information channels with misinformation and biased content? Two, should we develop non-human minds that might eventually replace us, thereby risking the loss of control of our civilization? And three, the signatories strongly believe that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Even Sam Altman, Co-Founder of OpenAI, has observed that AGI technology comes with a “serious risk of misuse, drastic accidents, and social disruption”, and hence calls for a “gradual transition”, for it “gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place.”

In view of these known and unknown challenges, there is, as the letter urges, a dire need for AI labs and experts to work together “to jointly develop and implement” safety protocols for AI design and development, and to subject them to audit “by independent outside experts”. That obviously calls for a pause: to take stock of AGI, weigh the pros and cons of its adoption by businesses, and draw up a safe and ethical roadmap.
 



16-Apr-2023

Gollamudi Radha Krishna Murty

