
Twenty-five years ago, the fear of Y2K gripped the world, and India was no exception. The worry was that computers would crash when 1999 ended because years had been stored with only two digits, so a year recorded as 00 could be read as 1900 instead of 2000. It proved to be a false alarm, but the year 2000 became a turning point for India’s IT industry.
By 2024, people were preoccupied with the fear of artificial intelligence (AI). It is true that the technology has the potential to reshape almost every aspect of human life. From healthcare and education to manufacturing, logistics and even entertainment, this emerging technology is revolutionizing industries, improving efficiency, and opening up new possibilities. Then why the fear? Because it is the first technology in human history with the ability to escape human control. The debate that dominated conversations throughout 2024, and will continue to do so in the years to come, is: Is AI a boon to humanity, or will it turn into a Frankenstein? Prime Minister Narendra Modi summed it up best when he noted that global security would face a major threat if AI-powered weapons reached terrorist organizations.
Electronics and IT Minister Ashwini Vaishnaw raised four major issues as AI gains pace and prominence: fair compensation for content creators, algorithmic bias on digital platforms, the impact on intellectual property and, finally, the question that arises from all three: whether the technology is safe. He also suggested that the safe harbour provisions, which grant legal immunity to social media platforms, should be reconsidered.
These fears are not unfounded. Google’s generative AI platform Gemini became an embarrassment to the tech industry for providing biased answers to questions about history, politics, gender and race. The Indian government saw red over a response that suggested the Prime Minister was a fascist, while the same question about Ukrainian President Volodymyr Zelensky and Donald Trump elicited quite diplomatic answers. Likewise, the deepfake video of actor Rashmika Mandanna, which was followed by videos of several celebrities across the spectrum, was a preview of what this technology could do even to ordinary citizens over time.
Beyond the fears and concerns, the year also witnessed a discussion about the positive attributes of the technology and how its benefits should be harnessed. Large Language Models (LLMs), the foundational models on which generative AI applications are built and which can perform a wide range of tasks, have led to a sharp division of views. Infosys co-founders NR Narayana Murthy and Nandan Nilekani opposed India entering the race to build LLMs, arguing instead for a focus on use cases, which can be implemented through Small Language Models (SLMs). The duo offered a strong rationale: LLMs need to be trained on large data sets, which takes years, and the big tech companies have already gained a head start. There is no point in reinventing the wheel and entering this race with limited resources; Indian companies would do well to focus on building SLMs that are use-case specific, according to this logic.
This was countered by Google Research India head Manish Gupta, who opined that building foundations is necessary to build use cases. Gupta cited the case of Aadhaar, built under Nilekani’s leadership, where the foundation was laid first and the use cases came later. A few Indian companies have built and unveiled LLMs, but the majority focus on use cases. Although the jury is still out on the preferred path, indications are that it will be a combination of both, with greater success likely in SLMs than in LLMs.
In a country like India, the impact of these technologies on jobs is another crucial area that is still passionately debated. While Luddites take the extreme view that widespread job losses are inevitable, optimists are skeptical of such doomsday predictions. Rishad Premji, Chairman of Wipro, summed it up well when he noted that the transformative potential of AI is expected to disrupt the job market as tasks, rather than job roles, become the focal point.
The general consensus is that routine, monotonous jobs risk elimination while skilled jobs will remain in demand, so continuous upskilling is required to stay relevant in the age of artificial intelligence. But is this really so? Historian Yuval Noah Harari questioned such prescriptions in his book Nexus, showing how some routine jobs are harder to automate than skilled ones. Playing chess, he noted, is easier to automate than, say, washing dishes. Likewise, society today may place a premium on the job of doctors and devalue that of nurses, but the truth is that AI has the potential to automate the former rather than the latter.
With such high stakes in every area of life, it is only natural that the year saw all segments of society take an interest in how artificial intelligence might evolve. Deepfakes, cybersecurity threats and data theft are the downside, while harnessing the technology in education, healthcare, and agriculture can bring huge benefits. Government certainly has a role in setting regulatory policy, yet 2024 saw governments across continents struggle over how to build a regulatory framework to govern its use. While every country needs to do its part, AI policy and regulation will also need global consensus because the technology is not limited by geographical borders. This is a debate that will continue in 2025.