
Beyond the Hype: Navigating the Future of Intelligence

The rise of AI tools, such as ChatGPT, has transformed household activities and professional fields, with India leveraging its scaling prowess to contribute significantly to the global tech landscape.

15-08-2024

The first time my mom took a real interest in understanding AI and my line of work was when her friends started generating cooking recipes with ChatGPT, and she felt she was missing out and wanted to ask about its usage and effectiveness. Soon after, my dad stopped asking us on the family WhatsApp group to draft messages for him, and started auto-generating them with ChatGPT. This was the first time I saw AI becoming a ‘household brand’, thanks to the ease and versatility of this Large Language Model (LLM), and I wondered why. I genuinely thought maybe it was just a fad around a shiny new tool, which people would use for some time and then forget about after getting a series of ‘robotic’, unreal, or vague responses.

AI Practitioners, Human Labelers, and Rankers have stuffed enough text into these ‘next-word predictors’ to make them probabilistically generate natural-sounding responses. These word predictors have no real understanding of our physical world or the laws of physics, and are easily prone to hallucination and manipulation. Most of the text they’ve extracted patterns from comes from a highly imperfect language offering a rather limited set of characters and sounds, illogical spellings, and an overload of idioms, and is nowhere near as precisely articulate or scientifically constructed as even much older languages like Sanskrit. On a theoretical level, we know the ‘how’ behind the algorithms utilized in these models, but still don’t definitively know ‘why’ the Attention Mechanism (a key unit of mathematical operations in these models) has such strong predictive performance.
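For the curious, here is a minimal sketch of the scaled dot-product attention computation at the heart of these models, simplified to a single head with NumPy (the function names and toy dimensions are mine, purely for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how relevant each key is to each query
    weights = softmax(scores, axis=-1)   # each row sums to 1: a mixture per query token
    return weights @ V                   # blend the value vectors by those weights

# Toy example: a "sentence" of 4 tokens with 8-dimensional representations
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one contextualized vector per token
```

Each token scores its relevance against every other token, and those scores decide how much of each token’s information flows into the output; why this simple recipe predicts language so well remains, as noted above, an empirical mystery.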

A mentor of mine once told me in all seriousness (and I find it equally funny) that AI Practitioners are not like Chemists, but ‘Alchemists’, who perform crazy experiments and see what works, essentially relying on empirical results and glorified guesswork at the most fundamental level.

However, even though “all models are wrong, but some are useful” (as the statistician George Box put it), these existing Foundation Models trained on a bulk of the world’s knowledge, however suboptimally, have reimagined our interactions with the world (both digital and physical) and redefined the bounds of human cognition. Language, despite its structural imperfections, is still the most natural way humans interact with the world.

I love the way Microsoft described this AI era as a philosophical shift of usage from Auto-pilot (like a recommender system automatically running to suggest products) to Co-pilot, consciously and directly invoked by users to perform certain actions. And the diversity of applications, flexibility of usage, and near-natural user experience have triggered an Arms Race, with companies releasing more than 1,000 such model SKUs and trying to one-up each other by showing performance gains on diverse public benchmark datasets.

There’s hardly an industry that will remain untouched by AI, especially by this latest wave of Generative AI tools. Aside from ‘modern’ industries such as Tech and Finance and mission-critical areas such as Healthcare, I know of folks utilizing LLMs in the Education, Legal, and even Construction industries. Tools like ROSS’ AI Search use LLMs to sift through vast amounts of legal documents and case law, providing lawyers with relevant information in a fraction of the time it would take manually. Similarly, in the healthcare sector, LLMs can assist in diagnosing diseases by analyzing patient data and medical literature, helping doctors make more informed decisions. Not to mention, one of the first adopters of ChatGPT was the Government of Iceland, initially using it for enhancing public services and guiding its overall Digital Transformation strategy, and now even using it to preserve the Icelandic language. Applications built with these large Foundation Models boast proficiency across tailored tasks, and even across multiple modalities such as text, vision, and audio. Combined with the Global Economic Recession, the LLM wave has pushed companies across the world to optimize their resources and maximize productivity gains using these tools.

I’ve already seen massive productivity gains as an Engineer by using AI coding tools such as GitHub Copilot and its Chat extension, which can generate the code syntax for me while I focus on broader-scoped engineering considerations and the implementation strategy to embed in the tool’s prompt. I often find it funny that I get to use LLM-powered tools to generate code for building enterprise-grade tooling for evaluating LLMs. It’s established at this point that LLMs can streamline content creation, coding, research, customer service, etc., freeing up valuable time for professionals to focus on strategic and creative thinking and decisions. According to a study by Accenture, AI applications in healthcare could save the US healthcare economy up to $150 billion annually by 2026. According to a report by McKinsey, AI technologies, including LLMs, could potentially increase global productivity by 1.2% annually. And yes, I definitely used Copilot on Bing to articulate certain arguments and fetch relevant statistics with web citations for this article!

However, it’s crucial to recognize that LLMs are not infallible and have significant limitations. They can generate plausible-sounding but incorrect or nonsensical answers, and recent examples abound: LLMs producing incorrect answers to simple questions, generating harmful content that damages company reputations, and even falling prey to adversarial prompts designed to ‘trick’ them into generating incorrect or incoherent content. Just recently, I saw a screenshot on LinkedIn where even a latest and ‘well-performant’ multimodal model like GPT-4o generated a completely incorrect answer when asked if 3307 is a prime number (it is). These striking limitations underscore the crucial need for human oversight to verify the outputs of these models. Moreover, tasks that require emotional intelligence, such as counseling or nuanced negotiations, still remain firmly in the human domain, with LLMs likely only able to automate the tactical parts of such tasks, such as generating questions to ask or drafting a well-worded argument given tailored context. A report by the World Economic Forum highlights that while AI will displace some jobs, it will also create new roles that require human creativity, oversight, and complex problem-solving. As much as I understand the sense of panic and suspicion with any new technology, especially one of which we may not have a full deterministic understanding, LLMs with their current abilities should really be viewed as powerful tools that augment human capabilities, enabling us to achieve more while critical decisions and ethical considerations still remain under our control and discretion.
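For the record, a few lines of ordinary code settle the 3307 question instantly, a good reminder that deterministic tools remain the right instrument for deterministic questions (a minimal trial-division check; the function name is my own):

```python
import math

def is_prime(n: int) -> bool:
    """Trial division: n > 1 is prime iff no d in [2, sqrt(n)] divides it."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(3307))  # True: no divisor exists up to isqrt(3307) = 57
```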

Despite these limitations, adapting LLMs for custom applications is significantly reshaping the job market. A study by the McKinsey Global Institute found that up to 800 million jobs could be displaced by automation by 2030, with roles in data entry, bookkeeping, and routine manufacturing being the most affected. Meanwhile, the demand for AI-related jobs is skyrocketing, with the US Bureau of Labor Statistics projecting 31% growth in data science roles by 2030. In India, a report by NASSCOM indicates that around 69% of routine jobs in sectors like BPO and IT services could be automated, while job postings for AI professionals are increasing by 44% year-over-year. When it comes to custom LLM applications, the US leads in sectors like healthcare, for predictive analytics and personalized medicine, and finance, for enhanced fraud detection and customer service. In India, LLMs are increasingly used in customer support, with companies like HDFC Bank using a tailored solution to provide personalized suggestions, reportedly improving customer satisfaction and operational efficiency. While the US does benefit from a more mature tech ecosystem and higher R&D spending, India is also a key player with abundant usage of these models. According to Microsoft and LinkedIn’s 2024 Work Trend Index, “92% of knowledge workers in India use AI at work as compared to the global figure of 75%,” and 80% of leaders in India even prefer AI skills over years of experience for their hiring needs. And there are no signs of usage slowing down: the same report notes that global use of Generative AI at work has nearly doubled in the last six months.

While the US has introduced AI legislation and guidance in California and through an Executive Order, India is also working on legislation and has updated its AI advisory to acknowledge challenges like transparency and content moderation, even urging ‘intermediaries’ to implement guardrails such as watermarking techniques for AI-generated videos like deepfakes.

All these developments raise a natural question: how do we get educated and stay updated about these emerging AI capabilities? If you’re looking to blame the system for the lack of formal education on, or emphasis on, the latest technologies like AI, I assure you no educational program on the planet could have prepared for or foreseen this exponential pace of developments. And I am coming from a position of astronomical privilege as I say this.

I got to pursue my undergraduate degree specifically in “Data Science”, which was introduced as a new major when I entered the University of California San Diego in 2017, at a time when undergraduate degrees in Data Science were just emerging and available at only a select few colleges, even in the US. UC San Diego received $75 million to establish a standalone Data Science Department, and what came with it were a series of new faculty hires, a brand new suite of classes, and Career Fairs and events specifically for Data Science students. The classes were so current and tailored that we even had to learn to use experimental/beta features of popular AI frameworks, and write real, compilable code in those frameworks on paper in our exams. The field was perceived to be so specialized back then that most tech companies would typically restrict Data Science & AI positions to Masters/PhD applicants; my peers and I got to break that restriction, and we heard first-hand from several companies that we made them realize undergrads can also do this work! My academic background and practical experiences in this space helped me secure my first full-time job as an AI Engineer at Microsoft right out of undergrad, as the youngest employee in the AI program that I joined. And yet, what I’ve realized just a couple of years into my job is that my academic background was really helpful for learning the foundational concepts and inner workings of AI and Machine Learning methods, which are now deemed ‘traditional’ in our industry.

The domain knowledge that is relevant in AI today is already much more advanced and refined than what I studied in college. And even though having foundational knowledge helps ground and place new information in context, I and everyone around me (regardless of our backgrounds or experience) are all learning about the latest developments together.

Unfortunately, I don’t have an easy answer for ramping up on AI skills. Jobs in today’s world will continue to get less and less ‘static’ and more prone to disruption. Even though change is truly the only constant, the pace of progress has likely never been faster in human history. According to the World Economic Forum, 50% of all employees will need reskilling by 2025. It’s not just about staying on our toes by taking the latest courses, networking, etc., but about fully embracing the mindset of lifelong learning, and avoiding panic by acknowledging that virtually everyone is in a comparable situation. Part of engaging in this new industrial revolution involves our own critical thinking: identifying grunt work around us and trying to streamline those tasks with the latest tools. After this, a good next step for utilizing AI tools to their potential is to estimate upcoming trends, and attempt to align the surfboard of our skills to learn and ride the upcoming waves.

For the next 5-10 years, the broadest direction of work I foresee will be about reducing the barriers between the physical and digital worlds. Part of this vision is about unlocking effective applications with other kinds of Foundation Models aside from LLMs, such as Large Vision Models for a wide array of image/video understanding and processing tasks. Another key part of manifesting the next suite of developments relates to ‘Agent-based’ applications, where LLMs have the ability to invoke actions in the external world. This can be anything from managing someone’s calendar to creating workflows and automations for more compound tasks, which can involve one or more models solving chunks of a complex problem and (based on user intent) automatically invoking custom-defined functions to perform real-world actions. The most quintessential and impressive agent-based applications I’ve seen and used first-hand are Software Engineering tools such as GitHub Copilot Chat, GitHub Copilot Workspace, and the ‘Devin’ AI Software Engineer. LLM Agents, however, are orders of magnitude more expensive and harder to debug and evaluate than individual LLMs. This makes regulating overall costs and justifying business value a key technical challenge; addressing it can ensure that democratization of usage keeps pace with the latest developments.
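To make the ‘agent’ idea concrete, here is a minimal sketch of a tool-invoking loop. Everything here is an illustrative placeholder rather than any vendor’s actual API: the JSON action format, the tool registry, and the scripted stand-in for a real model call.

```python
import json

# Hypothetical tool registry: the real-world actions this agent may invoke.
TOOLS = {
    "add_calendar_event": lambda title, time: f"Scheduled '{title}' at {time}",
}

# Stand-in for a real model call: canned replies so the sketch runs end to end.
SCRIPTED_REPLIES = iter([
    json.dumps({"tool": "add_calendar_event",
                "args": {"title": "Dentist", "time": "Fri 3pm"}}),
    "Done! Your dentist appointment is on the calendar for Friday at 3pm.",
])

def call_llm(conversation):
    """Placeholder for an actual LLM API call."""
    return next(SCRIPTED_REPLIES)

def run_agent(user_request, max_steps=5):
    conversation = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        reply = call_llm(conversation)
        try:
            action = json.loads(reply)   # valid JSON => the model wants a tool
        except json.JSONDecodeError:
            return reply                 # plain text => the final answer
        result = TOOLS[action["tool"]](**action["args"])
        conversation.append({"role": "tool", "content": result})
    return "Step limit reached without a final answer."

print(run_agent("Book me a dentist appointment on Friday at 3pm."))
```

Notice that each step requires a fresh model call and that failures can hide anywhere in the loop, which is exactly why agents multiply both cost and evaluation difficulty compared to a single prompt-response exchange.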

One of the major existing approaches to drive down costs is to apply Model Compression techniques that reduce the size of the model, both in terms of memory consumption and the number of mathematical calculations involved in training and prediction. Small Language Models (SLMs) have already shown performance comparable to typical LLMs with hundreds of billions of parameters, using just a few billion. Several SLMs are already small enough to fit on personal edge devices such as laptops or phones, offering cost and privacy benefits. However, there is still truly a lot of untapped potential in making models more efficient. As I once ranted on LinkedIn on a Friday evening, unable to sleep with this thought, studying the results of compression techniques applied to various AI models gave me a ‘philosophical shock’: an overwhelming proportion of a model’s architecture tends to be practically irrelevant to predictive performance! Even if such developments in AI continue to rely on empirical evidence, AI researchers will hopefully have experimented enough times to find better approaches.
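To give a flavor of what compression looks like mechanically, here is a toy NumPy sketch of magnitude-based weight pruning, one common technique: zero out the smallest-magnitude entries of a weight matrix and measure how the layer’s output shifts. (A random matrix like this drifts faster than a trained network would; trained weights concentrate near zero and models are usually fine-tuned after pruning, which is what makes the surprisingly high sparsity levels I alluded to achievable.)

```python
import numpy as np

def magnitude_prune(W: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitudes."""
    threshold = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= threshold, W, 0.0)

rng = np.random.default_rng(42)
W = rng.normal(size=(256, 256))   # stand-in for one layer's weight matrix
x = rng.normal(size=256)          # a sample input to that layer

for sparsity in (0.5, 0.8, 0.9):
    Wp = magnitude_prune(W, sparsity)
    kept = np.count_nonzero(Wp) / W.size
    # Relative change in the layer's output after discarding most weights
    drift = np.linalg.norm(Wp @ x - W @ x) / np.linalg.norm(W @ x)
    print(f"kept {kept:.0%} of weights, relative output change = {drift:.2f}")
```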

It can cost tens of millions of dollars to train a single LLM from scratch, which only a handful of companies can afford. With enough usage of these models, the hope is that the money earned from serving them for predictions will amortize the training and computational costs. However, given the surplus of choice across hundreds of language models in the market and the pace of new releases, I’ll be surprised if every model vendor is able to yield a sufficient return on its investment.

Regulating costs will also be an important factor in making quicker progress towards Artificial General Intelligence (AGI), which has triggered polarizing debates among AI experts over timelines and the extent of capabilities. OpenAI’s original 2018 definition of AGI, “outperforming humans at most economically valuable work,” makes intuitive sense because these expensive models will be prioritized for use by enterprises seeking returns on investment and profits. It sounds like a reasonable proposition to me that progress towards AGI will be iterative, with safety guardrails baked into the models at every step of the process, to avoid steering a model away from its intended use or letting it autonomously generate its own intent. However, I’m not looking to debate other experts and sort right from wrong. Rather, I want to point out that AGI is a ‘fluid’ concept (even OpenAI has redefined it in recent times), and as these new models compel us to reimagine the branches of cognition and gain clarity on distinguishing human skills from those of AI models, the definition, debates, and timelines of what we consider AGI will adapt accordingly.

In this race to further AGI development, concerns about climate change are likely to become more profound. According to Gartner, AI could consume up to 3.5% of the world’s electricity by 2030. Research from the University of Washington estimates that training GPT-3 used around 10 gigawatt-hours, roughly equivalent to the annual electricity consumption of 1,000 US homes. Aside from research on enhancing model efficiency, and enterprise AI tools trying to provide more observability into electricity consumption, a broad implicit assumption I see among AI optimists is that the tech will be ‘good enough’ to find solutions to pressing problems like climate change and global warming. Perhaps. Though one could argue this strategy isn’t exactly oriented towards prevention, but towards repair.
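That homes comparison checks out on the back of an envelope, assuming roughly 10,500 kWh of annual electricity use per average US household (my assumed figure, close to commonly cited EIA averages):

```python
training_energy_kwh = 10_000_000   # 10 GWh, the cited GPT-3 training estimate
home_annual_kwh = 10_500           # assumed average annual usage per US home
print(round(training_energy_kwh / home_annual_kwh))  # ~952, i.e. roughly 1,000 homes
```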

Overall, all these diverse developments and challenges can sound overwhelming, and they raise questions like: what can India do here, and how can India contribute to this exponentially-growing landscape? I once came across a meme that classified the major global players in tech with the punchline “US Innovates, EU Regulates, China Copies.” All these roles are actually important in their own ways, often complementing each other by showing improvements over each other’s work, which can facilitate a healthy form of competition. India also has its own unique quality, which enables it to not only compete, but also complement and contribute to global progress. I want to add to this punchline and claim, “India Scales.”

We’ve seen many diverse examples of this quality in action. India’s Unified Payments Interface (UPI) has scaled direct, real-time payments unlike any other payment system, and is already being used by 12+ countries and studied by dozens more. ISRO launches satellites for other countries at a fraction of NASA’s cost, and even sent an orbiter to Mars for Rs. 7 per km, cheaper than riding an Auto in a city. We’ve also seen novel tech scaled alongside the diverse fabric of the country, such as Flipkart’s voice recognition handling local dialects. Reliance Jio offered the world’s cheapest data services, acquiring more than 100 million users, including first-time internet users in the most remote parts of the country, and became the fastest-adopted technology in human history in 2017, according to NiemanLab. Several novel products and services have been piloted in India, from free WiFi at India’s railway stations through Google Station, to dedicated web series on Amazon MiniTV, currently only available in India.

Regardless of whether it’s the government or private players behind these innovations, India’s geographical, cultural, and linguistic diversity makes its environment really fertile for scaling and democratizing products, while keeping them useful for diverse user bases and across a tremendous market volume. This is precisely the strength India should lean into and continue to offer. If India can do something for itself, India can do it for the world.
