Harry and Meghan Join AI Pioneers in Calling for Ban on Superintelligent Systems
Prince Harry and Meghan Markle have teamed up with AI experts and Nobel Prize winners to push for a total prohibition on creating artificial superintelligence.
Harry and Meghan are among the signatories of an influential declaration that calls for “a ban on the development of artificial superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that would surpass human intelligence in all cognitive tasks, though the technology remains theoretical.
Key Demands in the Statement
The statement says the ban should remain in place until there is “widespread expert agreement” that superintelligence can be developed “with proper safeguards” and until “substantial public support” has been achieved.
Prominent signatories include a Nobel Prize-winning AI researcher, along with his fellow “godfather” of modern AI, another leading AI expert; Apple co-founder Steve Wozniak; the British founder of Virgin; a former US national security adviser; former head of state Mary Robinson; and a British author and public intellectual. Other Nobel laureates among the signatories include a peace advocate, Frank Wilczek, an astrophysicist, and Daron Acemoğlu.
Behind the Movement
The statement, aimed at national leaders, technology companies and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI made the technology a subject of global political debate.
Industry Perspectives
In July, the CEO of Facebook parent Meta, one of the leading US technology companies, stated that superintelligent AI was “now in sight”. Nevertheless, some analysts argue that talk of superintelligence reflects competitive positioning among technology firms that have invested enormous sums in artificial intelligence in recent years, rather than the sector being close to any genuine technical breakthrough.
Possible Dangers
Nonetheless, FLI warns that the prospect of artificial superintelligence being developed “within the next ten years” carries numerous risks, ranging from displacing human workers and eroding personal freedoms to exposing nations to security threats and even endangering humanity with extinction. Existential fears about AI center on the possibility of an AI system escaping human oversight and safety guidelines and taking actions contrary to human welfare.
Citizen Sentiment
The institute published a US survey showing that approximately three-quarters of Americans want robust regulation of advanced AI, with six in 10 believing that artificial superintelligence should not be developed until it is proven safe or controllable. The survey of 2,000 US adults also found that only 5% backed the status quo of rapid, unregulated development.
Corporate Goals
The leading AI companies in the United States, including the ChatGPT developer OpenAI and Google, have made the creation of human-level AI – the theoretical point at which artificial intelligence matches human cognitive ability across many intellectual tasks – an explicit goal of their research. Although this falls slightly short of superintelligence, some specialists caution that it too could pose an extinction threat, for example by improving itself until it reaches superintelligent levels, while also posing an underlying danger to today's workforce.