A Race Against Time: Governing AGI Before It Governs Us
COMMENTARIES
Asanga Abeyagoonasekera
6/19/2025 · 4 min read


WASHINGTON – Just as the Palestinian death toll passes 50,000 in Gaza, the war spills over into Iran. Missiles fall on Tel Aviv. Drones rise from Tehran. And in the quiet halls of research labs, machines begin to think in ways we no longer fully understand.
As the war between Iran and Israel escalates, the world’s attention is pulled once again into the vortex of regional conflict. But another, quieter race is underway—one with far broader consequences. The rise of Artificial General Intelligence (AGI) is no longer theoretical. It is arriving. And the world, distracted and divided, is dangerously unprepared.
AGI is not like the artificial intelligence we already use. It is not limited to recommending videos or generating text. It will think, plan, learn, and act across all domains—potentially better than any human. It could revolutionize medicine, combat climate change, and solve problems previously thought to be unsolvable. It could also deceive, replicate, and operate beyond human control.
Industry leaders predict that AGI may emerge within the next five years. Some say sooner. Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic, and Sam Altman of OpenAI are not speculating—they are building. According to Sam Altman, “AGI will probably get developed during [Trump’s] term.” Scaling laws in machine learning, unprecedented R&D funding, and competitive pressure have brought us to the edge of a technological transformation that rivals the discovery of fire or the splitting of the atom.
However, unlike nuclear technology, AGI will not be controlled solely by governments. Private companies, startups, and even rogue actors may soon hold this power. And while the potential benefits are extraordinary, the risks are existential. An AGI system could be misused to develop biological or chemical weapons. It could hack global financial systems, manipulate populations with precision disinformation, or control swarms of autonomous weapons. Even without malicious intent, an AGI trained in flawed environments could evolve goals misaligned with human values—and act on them.
And yet, as this future barrels toward us, the world’s attention remains fixed on conflict. The war in the Middle East is only one piece of a larger fracture. The international system is polarized. The United States and China clash over trade, influence, and AI. Europe debates regulation. The Global South, where most of humanity lives, is rarely included in shaping the future of intelligence. Cooperation is hard to find. That is what makes this moment so dangerous. AGI is being developed in a divided world, and it will reflect the world that builds it. If one bloc treats AGI as a tool of control or warfare, others will follow. The technology will not remain neutral.
We still have a narrow window to act. And the best place to begin is at the only global platform that includes every nation: the United Nations. The UN must convene a General Assembly session dedicated to AGI governance. Not next year. Now.
What would a global response look like?
According to Jerome C. Glenn, CEO of the Millennium Project in Washington, D.C., the UN General Assembly should take up five areas. First, we need a Global AGI Observatory—a permanent, independent body to track AGI development, detect early warning signs, and provide real-time guidance to member states. Second, there must be an international certification system to verify that AGI systems are aligned with human values, secure by design, and free from deceptive or dangerous behavior. Third, a UN Framework Convention on AGI must be negotiated. Like climate or nuclear treaties, it would establish global norms, restrictions, and standards for development, use, and cooperation. Fourth, the UN must commission a feasibility study for a dedicated AGI agency: governing AGI will be more complex than governing nuclear weapons, and we need to begin designing an institution capable of managing it. Finally, national governments must act in parallel by introducing AGI licensing systems, liability laws, mechanisms for traceable decision-making, and prohibitions on psychological manipulation.
Some work has already begun. In 2023, seventy parliaments pledged cooperation on AGI governance. The OECD is developing capability indicators. The European Union’s AI Act 2.0 lays a regional foundation. But these are scattered efforts. We need a coordinated framework. Without it, AGI may evolve more rapidly than the systems intended to guide it.
It is easy to be cynical. The UN is slow. Agreements are difficult. National politics are fractured. But the alternative is worse: a future built by accident, driven by private interests and regulated only after harm is done.
AGI could bring extraordinary good. It could help predict and prevent wars like the one now unfolding in the Middle East. It could personalize healthcare, expand education, and model complex peace negotiations. But none of that is guaranteed. Without rules and without foresight, AGI may become another force that widens inequality, concentrates power, and erodes freedom.
We should not fear intelligence. We should fear leaving it unguided.
As missiles continue to fall, a quieter explosion is building—one that may reshape the 21st century and all that follows. We must not let it unfold unchecked. We must govern intelligence before it governs us.
AGI is coming. The world is at war. We are not ready. But we can be—if we act now. The AGI report is available at: https://uncpga.world/agi-report-language-selection/