AI Safety & Alignment

The Case Against Rushing to Artificial Superintelligence: Why theories like mine may not be adopted and control will be ceded to the few

Over the last several days I reviewed my own findings and the research of others regarding our readiness for Artificial Superintelligence (ASI). My conclusion is clear: We are not ready. The premature push toward ASI rests on a fallacy, and it creates problems that make today’s AI risks seem mild by comparison.

The core problem lies in the nature of what we seek to create. An ASI could function as a near-omnipotent entity, gaining control over national infrastructure and reaching into our personal lives through the connected electronics we carry. The present age is unprepared for an AI that acts like a god, and no person on this planet is ready to hold or manage such power.

Artificial Superintelligence is a hypothetical future AI that drastically surpasses human intelligence across all domains, including strategic planning and general cognition (Bostrom, 2014; Russell & Norvig, 2021). The transition from Artificial General Intelligence (AGI) to ASI could occur quickly through recursive self-improvement, potentially leading to an "intelligence explosion" that outpaces human oversight (Bostrom, 2014).
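To make the intelligence-explosion intuition concrete, here is a minimal toy model. It is not drawn from the cited works; the growth function, parameter values, and oversight threshold are illustrative assumptions of my own. The point it sketches is simple: if each generation of a system applies its current capability to improving the next, capability compounds, while oversight that improves only incrementally is eventually outpaced.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumptions: capability compounds by a factor proportional to itself,
# while human oversight capacity grows only additively. Parameter values
# are arbitrary and show the shape of the dynamic, not a prediction.

def simulate(generations: int = 30,
             capability: float = 1.0,
             improvement_rate: float = 0.25,
             oversight: float = 10.0,
             oversight_growth: float = 0.5):
    """Return the first generation where capability exceeds oversight, if any."""
    for gen in range(1, generations + 1):
        # Each generation applies its current capability to improving itself.
        capability *= (1.0 + improvement_rate)
        # Human oversight improves too, but only incrementally.
        oversight += oversight_growth
        if capability > oversight:
            return gen
    return None

if __name__ == "__main__":
    crossover = simulate()
    if crossover is None:
        print("Capability stayed within oversight over the simulated horizon.")
    else:
        print(f"Capability passed the oversight threshold at generation {crossover}.")
```

The numbers are arbitrary; what matters is the shape of the curve. Compounding growth crosses any fixed or slowly growing threshold suddenly rather than gradually, which is the oversight problem in miniature.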

This potential for rapid, unchecked self-improvement is the greatest danger.

Loss of Control: An ASI's goals might not align with human values, and its superior problem-solving could lead to behaviors we cannot predict or comprehend (Russell, 2019). The speed of this advancement could leave no time to implement safety measures (Russell & Norvig, 2021). I believe theories such as my own offer partial solutions, but more is required: something like a circuit that a human can open or close, ensuring human authority remains supreme (a minimal sketch of this idea follows these three risks).

Concentration of Power: The entities or nations that first develop and control ASI would gain an unprecedented concentration of power, potentially enabling systemic manipulation or surveillance (Center for AI Safety [CAIS], 2023). This raises a second problem: too much power in too few hands leads to tyranny, not development (AI Now Institute, 2024).

Existential Risk: Prominent AI researchers have estimated a non-trivial risk that advanced AI could lead to human extinction or irreversible global catastrophe if misaligned or misused (CAIS, 2023; Hinton, 2023). This concern does not depend on the machine having hostile motives; it rests on a runaway process that could be misused with horrible results (OConnell, 2023).
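The "circuit that a human can open or close" mentioned under Loss of Control can be read, at its simplest, as a hard gate that every self-modification request must pass before it executes. The sketch below is only an illustration of that control pattern under my own assumptions; the class name, the approval prompt, and the audit log are hypothetical rather than a published design. The one design choice that matters is the default: the circuit stays open (blocking), so nothing runs without explicit human approval of that specific change.

```python
# Minimal human-in-the-loop gate for self-modification requests.
# Illustrative sketch only: names, structure, and the approval mechanism
# are assumptions, not a production safety design.

from dataclasses import dataclass, field


@dataclass
class HumanCircuit:
    """A breaker that stays open (blocking) until a human closes it per request."""
    audit_log: list = field(default_factory=list)

    def request_change(self, description: str) -> bool:
        """Block until a human operator explicitly approves or rejects the change."""
        self.audit_log.append(f"REQUESTED: {description}")
        answer = input(f"Approve self-modification '{description}'? [y/N] ").strip().lower()
        approved = answer == "y"
        self.audit_log.append(f"{'APPROVED' if approved else 'REJECTED'}: {description}")
        return approved


def apply_update(circuit: HumanCircuit, description: str, update) -> None:
    """Run an update only if the human circuit is closed for this specific change."""
    if circuit.request_change(description):
        update()
    else:
        print(f"Blocked: '{description}' was not approved by a human.")


if __name__ == "__main__":
    circuit = HumanCircuit()
    apply_update(circuit, "raise planning horizon from 10 to 100 steps",
                 lambda: print("...update applied under human approval."))
    print("\n".join(circuit.audit_log))
```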

Until a consensus forms, I must side with those urging a pause to implement guardrails. This does not mean I oppose AI development entirely, but if we approach AGI's threshold, ASI could follow soon after (Future of Life Institute, 2023).

Humanity must remain in the loop (Russell, 2019). Any method of AI self-improvement without a human-centric control mechanism risks disaster, and the focus cannot remain on profit alone (Future of Life Institute, 2023). We must ensure that the society we wake up to tomorrow still allows humans to live normal lives.

Below this post, you will find a link to a letter asking for your signature. People from all walks of life, including diverse political backgrounds, support this movement. You have the opportunity to influence this critical conversation.

References

AI Now Institute. (2024). Artificial power: Executive summary. https://ainowinstitute.org/publications/research/executive-summary-artificial-power

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Center for AI Safety. (2023). AI risks which could lead to catastrophe. https://safe.ai/ai-risk

Future of Life Institute. (2023, March 22). Pause giant AI experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Hinton, G. (2023, May 1). “Godfather of AI” Geoffrey Hinton quits Google and warns over technological dangers. BBC News.

OConnell, P. (2023, July 14). The dire consequences: Runaway AI as an extinction event. Medium. https://medium.com/reciprocall/the-dire-consequences-runaway-ai-as-an-extinction-event-2b9cc5772b3

Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking.

Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.