Artificial Intelligence: Is it the Future Hope or the Terminator?
Alan Turing said “…at some stage we should have to expect the machines to take control.” That prognosis remains in the realm of known unknowns. AI has the potential to improve the human circumstance. Equally, it can upend geopolitics and social structures, and derail business models and economies. Exposure is accelerating anxiety. In a fast-evolving ecosystem, how do you regulate that which is yet defining itself?
The fear of machines overwhelming humans has a long, continually evolving history. The world of fiction and movies has exposed audiences to scenarios involving intelligent machines pitted against humans — Ex Machina, The Matrix, Blade Runner, Star Wars, Terminator and Spielberg’s A.I. Artificial Intelligence.
The victory of humankind is scripted as inevitable. But as Daniel Kahneman stated, “we think of our future as anticipated memories”. And often anticipated memories are exacerbated by anxiety rooted in fertile imagination vividly enhanced in cinematic expression.
As early as 1950, Alan Turing, the genius mathematician, asked ‘Can Machines Think?’ He later postulated: “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.”
In 1965, Irving John Good, his wartime colleague at Bletchley Park, wrote in the journal ‘Advances in Computers’ that “an ultra-intelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.”
Whether machines think and feel is as yet in the realm of known unknowns. But the phenomenon of millions logging on to generative AI apps such as ChatGPT has amplified exposure and accelerated anxiety. This is heightened by the worry expressed by the creators themselves. Earlier this month, Sam Altman, the CEO of OpenAI, the company that created ChatGPT, told the US Senate: “I think if this technology goes wrong, it can go quite wrong…we want to be vocal about that.”
Altman is not alone. Over 350 top executives and researchers in artificial intelligence are signatories to a missive that ominously states: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” AI also occupied attention at the G7 meeting, where leaders called for ‘guardrails’ on the development of its uses.
The surge in AI-enabled apps for purposes ranging from writing to music to video, the upsurge in copyright violations, memes and fake narratives, the saga of Roko’s Basilisk, and reports of seemingly sentient behaviour have together heightened fears.
A legal fraternity gushing over AI found New York lawyers pulled up for filing briefs that cited fake cases generated by ChatGPT. Those in the creative arts, shaken by an AI-generated imitation of a Drake song that drew half a million listeners, must now worry about intellectual property rights and the automation of creativity — imagine if the Hollywood writers on strike were replaced by AI chatbots.
Step away from the landscape of angst and there is no disputing that Artificial Intelligence has the potential to enhance the human circumstance. The prowess of machines in complex calculation was revealed when, in May 1997, IBM’s Deep Blue defeated chess champion Garry Kasparov.
At a more profound level, DeepMind’s AI platform AlphaFold has predicted the structure of virtually every known protein, creating a pathway for the development of new medicines. The use of technology in the war in Ukraine underlines the potential of AI in defending nations.
AI is a fast-evolving ecosystem of technologies that can deliver benefits by improving predictions, optimising outcomes and bettering the delivery of services and welfare objectives in health, the public sector, finance, mobility, education (by enabling the tutoring of students and the training of teachers) and agriculture. Imagine the power of AI to transform agriculture in India: in soil testing, weather inputs and crop mapping, and in curbing post-harvest waste to improve yields and incomes.
At the same time, there is no denying that AI can upend geo-social structures and derail business models and economies. AI has the potential to disrupt entire sectors — from the coding of software to technical processes and professional services.
These possibilities have implications. They could widen the schism in the rules-based world order, deepen the potential for domination by tech hegemons and necessitate the expansion of welfare in nation-states. As advanced economies pour billions of dollars into AI, developing economies must worry about being left at a disadvantage.
Countries are taking steps to regulate. Italy has temporarily banned ChatGPT, citing privacy concerns. The EU, traditionally the first mover in regulating technology, has put out a white paper and a draft of new regulations. The UK has similarly put out a white paper. The US has begun a study on regulation and released a Blueprint for an AI Bill of Rights. India has issued a strategy paper but has declared no plans yet for laws to regulate AI.
Historically, regulation has played catch-up with technology. The pace of innovation in AI makes the chase even harder — and while laws are national, data boundaries are hard to police. Ideally, regulation must be tough on abuse, build a high wall around individual rights, and scaffold the space for innovation with soft-touch controls.
The big challenge is how to regulate what is yet defining itself. Yes, the baby must not be thrown out with the bathwater, but the baby must first be defined. Without clarity, regulation will mimic the fable of the blind men describing the elephant.
Shankkar Aiyar, political economy analyst, is author of ‘Accidental India’, ‘Aadhaar: A Biometric History of India’s 12-Digit Revolution’ and ‘The Gated Republic: India’s Public Policy Failures and Private Solutions’.