
Why They’re Worried | blog@CACM



1. Introduction

On March 22, 2023, the Future of Life Institute (FLI) published an open letter calling for a six-month pause on “the training of AI systems more powerful than GPT-4” (FLI, 2023). Motivated by concern over the existential risks of AI, those that might severely and irreversibly harm humankind, the letter highlights dangers specifically associated with large, opaque generative models. It was quickly signed by over 50,000 individuals and sparked passionate discourse. This public interest was sparked not only by the compelling argument put forth but also fueled by endorsements from hundreds of leading researchers and industry stakeholders. Nevertheless, the viewpoints of many experts, particularly those with small public platforms, remain largely unexplored.

It’s unlikely that all signatories unreservedly fear existential risks posed by AI, much less agree with the entire letter. To understand these experts’ concerns beyond existential threats, we conducted interviews and questionnaires with 37 signatories. We sought to understand their personal perspectives on the letter and to develop a comprehensive understanding of foremost AI experts’ deepest sources of distress about their field. The following presents a brief summary of our findings: what brought these experts to sign the letter, what most concerns them about the field, and what they believe needs to change. A more thorough discussion of our conversations is available elsewhere and linked below.

2. Why Sign?

Though most interviewees aligned with the letter’s spirit, many neither anticipated nor advocated for a pause, nor were they primarily concerned about AI’s existential risks. However, all interviewees agreed that the rapid development and deployment of advanced AI systems is inappropriate and potentially very dangerous. Their concern about the current pace of advanced AI deployment was great enough that they signed the letter, hoping it might help mitigate various anticipated risks by alerting the following three groups:

2.1 To Developers: Here’s an Out

Most signatories did not vilify the teams behind cutting-edge models. Many, technologists themselves, empathized with developers and attributed the AI race to industry competition and market forces. By publicly advocating for a slowdown, some hoped the letter could offer public justification for companies that wanted to exercise caution. However, less-optimistic interviewees felt that developers and companies are too driven by their ambition to create groundbreaking technology.

2.2 To Regulators: It’s Time to Act

Alternatively, some signed to encourage policymakers, who typically respond slowly to emerging technologies, to act more swiftly. They hoped that the letter would prompt much-needed and accelerated regulatory dialogue. And while some were skeptical of regulatory bodies’ abilities to rise to the occasion, others expressed cautious optimism about recent initiatives.

2.3 To the Public: Pay Attention

Finally, the signatories sought to bridge the gap between public perception and the actual state of AI. Many felt the mainstream understanding is often distorted, framing AI either as an existential threat or a miraculous solution to all problems. They hoped the open letter would ignite discourse and contribute to a more accurate public understanding of the technology’s ability, potential, and risk.

3. Why Slow Down?

Regardless of their motives for signing, each interviewee was deeply concerned about today’s rapid deployment of AI. However, due to diverse expertise, their concerns varied across different aspects of the field. Interviewees discussed risks associated with both the current state and future development of AI, as well as its potential effects. But every concern stemmed from the same source.

3.1 The Root of the Problem

Across the board, interviewees agreed that rapid deployment of advanced, uninterpretable models with minimal testing is dangerous. Experts cannot fully understand these models’ behavior, and therefore cannot fully control them or ensure their safety. Many deemed the expeditious global distribution of such models irresponsible and contrary to effective engineering practice. Some interviewees traced companies’ readiness to release unmanageable models to a tech culture that values rapid development over thoughtful implementation. As a result, experts feared today’s advanced AI systems pose a number of both direct and indirect risks. Here, we enumerate the most impactful of these risks, while our extended paper offers a complete and more nuanced discussion.

3.2 Direct Effects of Rapid AI Deployment

Signatories voiced significant concern about several problems that today’s advanced AI systems could directly cause due to their complexity and hasty deployment. These issues arise from unpredictable or undesirable model behaviors, whether unintentional or caused by human manipulation. Experts noted how often today’s systems create highly convincing falsehoods that could lead many to believe AI-generated misinformation. Further, these systems might also help spread human-generated misinformation, since AI-driven recommendation algorithms prioritize profit-maximizing content. Interviewees felt such algorithms are highly susceptible to politically motivated manipulation campaigns. Whether directly lied to by a model or shown false content by a recommendation engine, the public could grow to mistrust its information sources. If AI-perpetuated misinformation continues to spread, interviewees feared, people will begin to have difficulty believing much of what they read or see.

Additionally, signatories worried about the human tendency to bond socially with even simple chatbots, and about the ways powerful open-source models can be abused by bad actors.

3.3 Indirect Effects of Rapid AI Deployment

While concerns about the direct behavior of today’s advanced AI systems were significant, focus also extended to the ways these models are developed and how they might affect the systems into which they’re integrated. Here, we enumerate two such examples.

3.3.1 Job Displacement

Experts were universally concerned about the magnitude and speed of job displacement caused by the proliferation of AI. Of course, new niches will open up, but academic and industry interviewees alike worried that the pace of AI development is too great for society to adjust comfortably. Some mused that if a massive portion of the population is left without employment, and therefore without a stake in society’s success, society could be severely destabilized. From a more individual perspective, interviewees expressed uncertainty about the future role of humans as increasingly advanced AI systems allow them to offload complex tasks.

3.3.2 Disparate Power

Some experts also expressed concern about the vast global influence that powerful AI systems hold. They argued that the values of the designers inevitably shape the product, and that a small, homogeneous group of people with limited experiences should not be solely responsible for developing AI systems that will interact with diverse global users.

These cultural influences are nuanced but no less impactful than the explicit financial and political power wielded by the large tech companies at the forefront of AI development. Many interviewees expressed conviction that such companies possess substantial global influence, which they are motivated to expand. Some, as a result, felt the wealth generated by advanced AI systems will not trickle down to the public that partially funded their development (through endowments and grants) and will shoulder the burden of employment shifts.

Our complete paper describes interviewees’ additional concerns over the abusive work environments in which training data is labeled and the environmental effects of developing and using massive AI systems.

4. What Next?

Despite their deep considerations of risks, the experts did not have clear proposals for AI’s future. Though none believed AI should be fully entrusted to the free market, they varied significantly on what degree of regulation is wise. However, there was general consensus for a break from the relentless pace in the AI field. While advancements in technology often bring about positive change, our interviewees urged for a greater emphasis on considering potential risks and wider implications. Each expert we spoke with advocated for an unprecedented shift in the culture of computing: slow down, forget the tech for a moment, and consider its context.

For a more detailed exploration of this topic, check out our extended article here: https://arxiv.org/abs/2306.00891v1

5. References

Anderson, C. (2023, April 24). Personal communication.

Anonymous signatories. (n.d.). Personal communication.

Baeza-Yates, R. (2023, April 20). Personal communication.

Barto, A., & Struckman, I. (2023, April 27). Personal communication.

Bentley, B. (2023, April 21). Personal communication.

Bernardin, A., & Kupiec, S. (2023, April 26). Personal communication.

Edwards, J. S. (2023, April 21). Personal communication.

Future of Life Institute. (2023, March 22). Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Isola, P., & Struckman, I. (2023, May 8). Personal communication.

Kelly, J. (2023, April 3). Goldman Sachs predicts 300 million jobs will be lost or degraded by artificial intelligence. Forbes. https://www.forbes.com/sites/jackkelly/2023/03/31/goldman-sachs-predicts-300-million-jobs-will-be-lost-or-degraded-by-artificial-intelligence/?sh=523823aa782b

Koppel, J., & Struckman, I. (2023, April 21). Personal communication.

Kuipers, B., & Kupiec, S. (2023, March 23). Personal communication.

Kuo, Y.-T., & Struckman, I. (2023, May 4). Personal communication.

Kwon, J., & Struckman, I. (2023, April 26). Personal communication.

Anonymous & Struckman, I. (2023, April 24). Personal communication.

Mendelsohn, S. (2023, April 20). Personal communication.

Perilli, A., & Struckman, I. (2023, April 24). Personal communication.

Petersen, S., & Kupiec, S. (2023, May 4). Personal communication.

Reiner, P., & Struckman, I. (2023, April 26). Personal communication.

Rojas, C., & Kupiec, S. (2023, May 2). Personal communication.

Rosman, B., & Kupiec, S. (2023, April 26). Personal communication.

Saffiotti, A. (2023, April 20). Personal communication.

Shneiderman, B. (2023, May 3). Personal communication.

Struckman, I., & Kupiec, S. (2023). Why They’re Worried: Examining Experts’ Motivations for Signing the ‘Pause Letter’. arXiv preprint arXiv:2306.00891.

Tenka, S., & Kupiec, S. (2023, April 29). Personal communication.

Tiwary, A. (2023, May 3). Personal communication.

Vardi, M., & Struckman, I. (2023, April 28). Personal communication.

Winfield, A., & Struckman, I. (2023, April 25). Personal communication.

 

Isabella Struckman is a fourth-year student at the Massachusetts Institute of Technology studying Artificial Intelligence & Decision Making and Ethics. Sofie Kupiec recently graduated from the Massachusetts Institute of Technology with a degree in Computer Science, Economics, and Data Science.

