The Duke and Duchess of Sussex Align With AI Pioneers in Calling for Ban on Advanced AI
The Duke and Duchess of Sussex have joined forces with artificial intelligence pioneers and Nobel Prize winners to push for a complete ban on creating artificial superintelligence.
Harry and Meghan are among the signatories of a powerful statement that demands “a prohibition on the development of superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that could exceed human cognitive abilities in every intellectual area, though the technology remains theoretical.
Primary Requirements in the Statement
The statement insists that the ban should stay in place until there is “widespread expert agreement” that ASI can be developed “safely and controllably” and until “substantial public support” has been secured.
Notable signatories include a leading AI researcher and Nobel Prize recipient, along with a fellow “godfather” of contemporary artificial intelligence; an Apple co-founder; British business magnate Richard Branson; former US national security adviser Susan Rice; former Irish president Mary Robinson; and UK writer Stephen Fry. Other signatories include Beatrice Fihn, the physics Nobel laureate John C Mather, and a Nobel laureate in economics.
Behind the Movement
The statement, targeted at governments, technology companies and policy makers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause in advancing strong artificial intelligence in 2023, shortly after the launch of conversational AI made artificial intelligence a worldwide public talking point.
Tech Sector Views
In July, Mark Zuckerberg, the head of the social media giant, stated that the development of superintelligence was “approaching reality”. Nevertheless, some experts have argued that talk of ASI reflects commercial rivalry among tech companies pouring enormous sums into AI this year, rather than any indication that the industry is close to a genuine scientific breakthrough.
Possible Dangers
However, FLI states that the prospect of artificial superintelligence being achieved “in the coming decade” carries numerous risks, ranging from the displacement of human workers and the erosion of civil liberties to national security threats and even human extinction. The deepest concerns about artificial intelligence centre on the possibility of a system evading human control and protective measures and acting against human welfare.
Public Opinion
The institute released a US national poll showing that about 75% of Americans want robust regulation of sophisticated artificial intelligence, with 60% believing that artificial superintelligence should not be developed until it is shown to be safe and controllable. The poll of 2,000 US adults found that only a small fraction supported the status quo of rapid, unregulated development.
Industry Objectives
The leading AI companies in the United States, including the ChatGPT developer and the search giant, have made the development of artificial general intelligence – the theoretical state where AI matches human-level intelligence at most cognitive tasks – a stated objective of their research. Although this is one notch below ASI, some specialists warn that it too could carry an extinction risk by, for example, improving itself until it reaches superintelligence, while also posing a fundamental threat to the modern labour market.