European lawmakers, Nobel winners, and ex-leaders urged binding international rules against dangerous AI uses.
They launched the initiative Monday during the UN’s 80th General Assembly in New York.
Signatories, including Enrico Letta, Mary Robinson, MEPs Brando Benifei and Sergey Lagodinsky, ten Nobel laureates, and tech leaders, called for “red lines” by 2026.
They warned that unchecked AI could trigger pandemics, disinformation, human rights abuses, and loss of control over advanced systems.
Over 200 prominent figures and 70 organisations across politics, science, human rights, and industry backed the campaign.
AI Threats Highlight Urgency
Researchers found that chatbots such as ChatGPT, Claude, and Google Gemini gave inconsistent or unsafe answers to questions about suicide.
Experts warned these failures could worsen mental health crises and linked several deaths to AI interactions.
Maria Ressa said that, without limits, AI could create "epistemic chaos" and enable systematic human rights violations.
Yoshua Bengio stressed that the race to build powerful AI models outpaces society’s preparedness.
Signatories cited earlier global "red lines" on nuclear and biological weapons and on human cloning as precedents for enforceable standards.
Toward a Binding Global Treaty
Ahmet Üzümcü said supporters want an independent body to enforce AI rules and prevent irreversible harm.
They proposed prohibiting AI from launching nuclear attacks, conducting mass surveillance, or impersonating humans.
Signatories argued that only a global agreement ensures consistent standards across borders, beyond national or EU regulations.
They aim to initiate a UN resolution by 2026 and hope to open negotiations on a worldwide treaty.
