
The Commoditization of LLMs

Large Language Models (LLMs) have not only fascinated technologists and researchers but have also captivated the general public. Leading the charge, OpenAI's ChatGPT has inspired the release of numerous open-source models. In this post, I explore the dynamics driving the commoditization of LLMs.

Switching Costs

Low switching costs are a key factor supporting the commoditization of LLMs. Transitioning from one LLM to another is simple largely because queries are expressed in a common language (English). This uniformity keeps the cost of switching minimal, akin to moving between different e-commerce websites. While LLM providers expose different APIs, those differences are not substantial enough to raise switching costs significantly.
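As a concrete (and deliberately simplified) sketch of how thin that switching layer can be, the snippet below assumes the openai (v1+) and anthropic Python SDKs are installed with API keys set in the environment; the model names are only illustrative:

    def ask_openai(prompt: str) -> str:
        from openai import OpenAI
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    def ask_anthropic(prompt: str) -> str:
        import anthropic
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
        response = client.messages.create(
            model="claude-3-opus-20240229",  # illustrative model name
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

    # The prompt is plain English, so switching providers is a one-line change:
    prompt = "Summarize the plot of Hamlet in two sentences."
    answer = ask_openai(prompt)   # or: ask_anthropic(prompt)

The application's "query language" (English) needs no translation between vendors; only the thin adapter around each SDK changes.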

In contrast, transitioning between different database systems involves considerable expense and complexity. It requires migrating data, updating configurations, managing traffic shifts, adapting to different query languages or dialects, and addressing performance issues. Adding long-term memory [4] to LLMs could increase their value to businesses at the cost of making it more expensive to switch providers. However, for uses that require only the basic functions of LLMs and do not need memory, the costs associated with switching remain minimal.
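A hypothetical illustration of that lock-in: if the memory takes the common form of an embedding store, every stored vector is tied to one provider's embedding model, so changing providers means re-embedding the entire corpus rather than just swapping an API call. The embedding functions and model names below are placeholders, not any particular vendor's API:

    from dataclasses import dataclass

    @dataclass
    class MemoryRecord:
        text: str
        vector: list[float]   # produced by one provider's embedding model
        model_id: str         # e.g. "provider-a/embed-v2" (hypothetical name)

    memory: list[MemoryRecord] = []

    def remember(text: str, embed, model_id: str) -> None:
        # Store the text together with its provider-specific embedding.
        memory.append(MemoryRecord(text, embed(text), model_id))

    def migrate(new_embed, new_model_id: str) -> None:
        # Switching providers: every record must be re-embedded and re-indexed.
        for record in memory:
            record.vector = new_embed(record.text)
            record.model_id = new_model_id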

Competition among Leading Organizations

OpenAI’s GPT-3.5 initially captivated the public and was soon followed by the more capable GPT-4. Meanwhile, competitors such as Anthropic with Claude 3, Meta with Llama 3, Google with Gemini 1.5 Pro, and others have released models, some of which perform comparably to or even better than OpenAI’s offerings on benchmarks [1] [5].

The availability of large datasets on the Web [3], which are used to train these models, has enabled this rapid development, although processing and cleaning the data demands substantial investment in both hardware and human resources. Recognizing the strategic importance of AI, large organizations are keen to minimize their dependence on a few providers and are investing heavily in developing these technologies themselves. This investment has spurred intense competition, driving organizations to continually release improved LLMs and better tooling around them. With new models appearing almost monthly [2], further gains in performance and reductions in cost are likely, leading to even less differentiation between providers’ products.

Open Source

At their core, Large Language Models are software applications running on hardware, like any other software product. The software industry has significantly democratized technology through open-source projects such as Linux and Android.

In the realm of artificial intelligence, intense competition among organizations has made open-sourcing LLMs an attractive strategy to level the competitive field. Open source models like Llama and Mistral allow multiple infrastructure providers to enter the market, enhancing competition and lowering the cost of AI services. These models also benefit from community-driven improvements, which in turn benefits the organizations that originally developed them.
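For example, a single hosting stack can serve many open-weight models interchangeably. The sketch below assumes the Hugging Face transformers and torch packages, a machine with enough GPU memory, and that the (real but merely illustrative) model's license has been accepted on the Hub:

    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # swap in another open model ID to change "vendors"
        device_map="auto",
    )

    result = generator("Explain commoditization in one sentence.", max_new_tokens=64)
    print(result[0]["generated_text"])

Because the serving interface stays the same regardless of which open model is loaded, any infrastructure provider can offer comparable hosting, which is exactly the dynamic that drives prices down.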

Furthermore, open source LLMs serve as a foundation for future research, making experimentation more affordable and reducing the potential for differentiation among competing products. This mirrors the impact of Linux in the server industry, where its rise enabled a variety of providers to offer standardized server solutions at reduced costs, thereby commoditizing server technology.

Conclusion

The factors discussed above suggest that LLMs may become commoditized in the future. Software practitioners should use this insight to evaluate how LLMs can address specific business challenges cost-effectively. Researchers can use this trend to identify new areas of study that build on LLMs.

References

[1] Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E. Gonzalez, and Ion Stoica. 2024. Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference. arXiv.org. Retrieved May 4, 2024 from https://arxiv.org/abs/2403.04132

[2] Klu.ai. Large Language Models Timeline. Retrieved May 4, 2024 from https://klu.ai/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fklu-large-language-models-timeline.e9fde945.png&w=750&q=100

[3] Yang Liu, Jiahuan Cao, Chongyu Liu, Kai Ding, and Lianwen Jin. 2024. Datasets for Large Language Models: A Comprehensive Survey. arXiv.org. Retrieved May 4, 2024 from https://arxiv.org/abs/2402.18041

[4] Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei. 2023. Augmenting Language Models with Long-Term Memory. arXiv.org. Retrieved May 4, 2024 from https://arxiv.org/abs/2306.07174

[5] Vellum. LLM Leaderboard 2024. Retrieved May 4, 2024 from https://www.vellum.ai/llm-leaderboard#model-comparison

Dhiren Amar Navani is a Senior Software Engineer at Zillow. His blog and newsletter are at https://www.softwarebytes.dev/
