The Duke and Duchess of Sussex Align With Tech Visionaries in Demanding Ban on Superintelligent Systems

The Duke and Duchess of Sussex have joined forces with artificial intelligence pioneers and Nobel laureates to advocate for a total prohibition on creating artificial superintelligence.

Harry and Meghan are among the signatories of a powerful statement that calls for “a prohibition on the development of artificial superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that could exceed human intelligence in all cognitive tasks, though this technology has not yet been developed.

Key Demands in the Statement

The statement insists that the prohibition should remain in place until there is “broad scientific consensus” that ASI can be developed “with proper safeguards” and until “strong public buy-in” has been achieved.

Prominent signatories include AI pioneer and Nobel Prize recipient Geoffrey Hinton; a fellow pioneer of contemporary artificial intelligence; Apple co-founder Steve Wozniak; UK entrepreneur Richard Branson; a former US national security adviser; a former Irish president and international leader; and UK writer Stephen Fry. Other Nobel laureates who endorsed the statement include a peace laureate, a physics laureate, an astrophysicist, and the economist Daron Acemoğlu.

Organizational Background

The statement, aimed at governments, tech firms and policymakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI chatbots made artificial intelligence a worldwide public talking point.

Industry Perspectives

In recent months, Meta's chief executive claimed that the development of superintelligence was “now in sight”. However, some analysts have suggested that talk of ASI reflects competitive positioning among tech companies that have spent hundreds of billions of dollars on AI, rather than the sector being close to any genuine technical breakthrough.

Potential Risks

Nonetheless, FLI states that the prospect of ASI being developed “within the next ten years” carries numerous risks, ranging from the displacement of human workers and the erosion of personal freedoms to national security threats and even human extinction. Deep concerns about artificial intelligence center on the potential for an AI system to evade human control and safety guidelines and to initiate events contrary to human interests.

Public Opinion

FLI published an American survey showing that about 75% of Americans want robust regulation of sophisticated artificial intelligence, with six in 10 believing that artificial superintelligence should not be created until it is proven safe or controllable. The poll of 2,000 US adults found that only a small fraction supported the status quo of rapid, unregulated development.

Industry Objectives

The top artificial intelligence firms in the US, including the developer of ChatGPT and Google, have made the development of artificial general intelligence – the theoretical point at which artificial intelligence matches human intelligence across many intellectual tasks – a stated objective of their research. While this is one notch below ASI, some specialists caution that it too could pose an extinction threat, for instance by improving itself until it reaches superintelligent levels, while also presenting an underlying danger to the contemporary workforce.

Christine Cohen

A psychologist and mindfulness coach with over a decade of experience in mental health advocacy.