Securing Liberal Democratic Control of AGI through UK Leadership
This was originally a piece co-written to influence policy makers, and had input from a range of people including senior frontier industry figures, former senior government advisers, and others who share the concerns raised in this piece. This is being published openly now in updated form given recent developments in the AI space.
Update: We received many very helpful and thoughtful responses to this paper. We have written a follow-up response [click here to read] clarifying some points where we were not clear, and responding to some common concerns. Jack Clark, a co-founder of Anthropic, a leading company working to develop AGI, wrote about it in his influential newsletter.
Update Oct 2023: Nitarshan Rajkumar, who now works in the AI Taskforce in government, was one of the co-authors of this piece, and deserves more credit for it than I do.
Image generated by DALL·E 2.
Thanks for reading James W. Phillips' Newsletter! Subscribe for free to receive new posts and support my work.
Within this decade, we may build Artificial General Intelligence (AGI) – AI capable of performing most cognitive labour a human can do. Such a development would have an unprecedented effect on our society; 'agentic' forms of AGI may also pose an existential threat to our security. The current development path towards AGI is inherently unsafe.
The UK is in a unique position to alter this path in alignment with our values and for our benefit. However, this advantage has been squandered for a decade, and is now rapidly evaporating under an unsafe 'race to the bottom' dynamic between private companies funded by US tech monopolies.
Ensuring that AGI is developed safely and in the interests of the British people and liberal democracies must be the highest priority of the British state over the next decade. We propose this should be done through pursuing a multilateral approach to advancing and controlling AGI in partnership with our companies and liberal democratic allies. This should begin with creating a commercially connected elite public AGI lab under leadership of a frontier tech industry expert.
There is a brief window over the next two years in which rapid action is required to provide any chance of success. Specifically, this requires that we:
Procure national AI supercomputing infrastructure comparable to leading US private labs.
Create an advisory group of frontier tech, not legacy academic, expertise to identify major AI research projects to run on this infrastructure.
Grow an elite public-sector research lab, led by someone with both technical skill and entrepreneurial expertise, to build a research agenda at the frontier of AI.
We invest almost £20 billion per year in R&D - a modest fraction of this must immediately be diverted to a national effort toward frontier AGI leadership.
1. Accelerating progress towards AGI makes this decade the most important in our history.
Advances in AI capabilities are now exponential. A decade ago, AI could barely recognise a photo of a cat. Today, Bing’s chatbot is capable of a wide range of tasks including searching the internet to write and edit essays and code computer programs, whilst AI is superhuman at many narrowly defined tasks, including those involving strategy, such as the game Diplomacy. The release of GPT-4 today will bring further surprises.
In private, leading figures believe that within another 5-10 years we will be able to build AI capable of performing almost all cognitive labour a human can currently do, and doing most of it far beyond human ability. Such an Artificial General Intelligence (AGI) would likely alter society more than any advance before it in human history, and would represent a major strategic surprise to the UK.
Today's narrow AI systems have already shown immense economic and social potential to improve our lives. They have shown the potential to replace a large amount of search (a market of ~£350bn/year), pass a medical licence exam, pass a bar exam, write half of all code, and write, illustrate, and voice movies. They have increased writing productivity by 50%, increased coding productivity by 55%, and research from MIT suggests that the broad adoption of such systems could triple national productivity growth rates. We believe these benefits will continue and affect nearly every part of the economy on the path towards AGI.
But there are also serious risks. AGI would present a novel and existential threat if it takes on 'agentic' forms that are capable of long-term planning and an ability to autonomously execute on goals - i.e., having agency. Such systems could be uninterpretable to humans, and we would lack the means to verify that their objectives are not extremely misaligned with our intentions and values. There are currently no good theories for how to keep a superintelligent AI system aligned with human interests. Within hours of being released, ChatGPT and Bing’s ‘safety features’ were broken and the entities were threatening humans, providing instructions to produce bombs, and producing plans to take over the world via manipulating humans. Once able to act independently, advanced AI entities could resemble a cyber weapon posing an existential risk to our species. Put simply: creating an entity far more intelligent than yourself may be extremely dangerous, and nobody knows how to mitigate this risk.
Even if AGI can be safely aligned to our values and controlled, the UK also has substantial vulnerabilities from such a development relative to other nations. We are highly dependent on services and creative exports, areas that are already being disrupted by generative AI and will be the most easily automatable under AGI. Our small geographic size relative to our population makes us less able to rely on automated manufacturing and natural resource extraction, which will become more important for national prosperity in relation to skilled human labour once AGI exists.
Accordingly, AGI presents a major strategic challenge for the United Kingdom unlike any before it, exceeding even the development of control over nuclear technology.
2. The current development path toward AGI is unregulated, unaccountable, and unsafe.
Advanced AI development is completely unregulated at present, and is entirely controlled by US corporate actors, while governments lack significant expertise or understanding of this technology and where it is heading. There are multiple concerning developments:
Competitive Pressures. There is clear economic incentive to build advanced AI systems leading to AGI. In particular, there is a strong incentive to remove any presence of human oversight in such systems due to the costs this introduces, and the disadvantage in competition with other companies or nations that fully automate. Given its international nature we cannot regulate to stop this unilaterally.
Lack of Government Oversight. There is currently no oversight or government role whatsoever in the development of the most advanced AI systems. This would be akin to having let private actors develop and possess nuclear weapons and energy without any regulation or control over what is clearly an immensely powerful dual-use technology. It is not just a matter of willing regulation into existence, however: the only expertise in such systems lies within frontier research labs, with no comparable expertise within government itself or public-sector labs. This development is not under liberal democratic control and oversight, and no private actor pursuing AGI is entirely UK controlled.
Technical difficulty of AGI Control. It is also not a matter of willing safe solutions to clearly defined problems into existence. Developing safe systems is extremely difficult (e.g. Bing GPT threatening to harm users), and we currently do not even know what the "right" technical thing to do is, nor are there promising ideas.
The UK’s position and opportunity
3. The UK has the ability to steer the development of AI towards advantageous paths.
Largely due to Google-owned DeepMind, the UK is the only liberal democratic country besides the US that has the capability to lead in the path toward AGI. In San Francisco, labs such as OpenAI and Anthropic are explicitly pushing towards this goal. DeepMind is likely the only comparable research lab in quality, and is a significant attractor of global AI talent to London. No other country has anything close to the talent density of these cities and organisations. A unique advantage for the UK is that, unlike in Brussels or Washington, London benefits from the co-location of technical and government expertise. The UK's governmental structure is in principle far more nimble than the US and EU as well, centralised in a way that enables rapid action if the political will and talent is present. The UK must capitalise on this historic opportunity and better leverage the fact it hosts a world leader in this critical technology.
4. Our existing research leadership and institutions have squandered this ability for a decade.
Major R&D advances in AI are largely coming from a new generation and paradigm of researchers that are not represented in the professorial class in UK institutions, but rather exist in tech monopoly funded labs. This is a result of the paradigm shift of approach over the past decade, where a previously fringe approach (neural networks) has become mainstream and highly commercializable.
The UK professorial class’ expectations and predictions about AI development have been consistently wrong for over a decade. For example, even the most recent University of Cambridge AI strategy makes absolutely no mention of AGI, Large Language Models, AI Safety, or other cutting-edge topics. The professorial class is composed of researchers from the prior paradigm, who often view current AI progress with a mix of suspicion and skepticism. The AI Council and Council for Science and Technology have for years failed to warn the highest levels of government of the pace of progress in AI and its implications, and lack many figures working meaningfully close to the frontier of AI. Institutions like the Turing Institute suffer from similar problems: they are led by very senior academic figures without experience at the cutting edge of AI, and their governance is not set up for this task. For these reasons and others, UKRI is also not capable of addressing this challenge.
Collectively, this class has failed to keep sovereign capabilities at the cutting edge of artificial intelligence. They have failed to adapt to the new paradigm: funding allocations are controlled by professors from the prior paradigm, they lack sufficient scale or concentration of resources to compete, and they cannot compete for global talent. As previously highlighted, “without DeepMind the UK’s share of the citations amongst the top 100 recent AI papers drops from 7.84% to just 1.86%”, neck and neck with Hong Kong and Switzerland.
In summary, public-sector leadership and institutions have repeatedly underemphasized progress and risks from AGI, and have ignored warnings from leading private labs for years. These have all been massively wasted opportunities, and reliance on them is now a major strategic risk as pathways to AGI materialize outside of our control.
5. Our advantage is evaporating in the current competitive environment.
Whilst the UK has a leadership role through DeepMind, this advantage is a) dependent on a US tech giant's funding and b) diminishing as competition grows globally.
In the coming years, experts believe AI companies are going to move from spending tens of millions of pounds on training single AI models to hundreds of millions or billions of pounds - if the UK doesn't act now, it will have no hope of leading the future. OpenAI has just launched GPT-4, a model costing tens of millions of dollars to train, which will further fuel the resource-intensive commercial AI race set off by ChatGPT and Bing. If we don't act now, we will likely lose our advantage, as we lack the resources to sustain a competition with these companies, or the US and China, and possibly even the EU.
Continuing our existing approach would be a deliberate choice for the UK state to head towards strategic irrelevance in AI. Whilst we cannot stop AGI development unilaterally, we must ensure we and allied liberal democracies are in a position to control it. The path to this is difficult, as we do not want to accelerate a race. Rather we need to begin a multilateral effort that embeds our current advantage and ensures nation states allied in values have a seat at the table.
6. We can ensure UK agency over AGI through partnership with companies and allies.
The UK needs to anchor a nation-state effort to pursue the sovereign, accountable, and safe development of Artificial General Intelligence. However, it must use a new organisational approach and talent pool, rather than fund it through existing channels that are demonstrably not capable of this task.
This would be very loosely analogous to a ‘CERN for AI’, after the high-energy physics research organization, but would also resemble the Manhattan Project in the need for a degree of information secrecy and security as well as visionary leadership with operational expertise. Whilst the European Union has considered such an endeavour, it currently lacks the talent pool and expertise that the UK has.
Actions to be taken
1. The UK must have serious AGI expertise advising the highest levels of government.
The state needs to ensure it is receiving useful advice on AGI from people actually building it, and reform existing councils to meet this task. Leadership should have monthly meetings with individuals in top frontier tech labs and others who take AGI seriously. This is critical, as without the right trusted advisors there is little chance of success; in all aspects of this endeavour, the right talent will be the most important factor. This will not come from the senior professorial class in the UK. It must come from those who share these concerns and have deep knowledge of the issues, working in frontier labs.
2. The UK must have access to the supercomputing necessary for AGI.
The basic technical methods underlying OpenAI's ChatGPT and GPT-4 are now 6 years old – the key insight in this time has been that massive amounts of compute are currently the biggest driver of progress in AI, with compute usage at the frontier now doubling every 10 months. Access to such compute is especially crucial to attract and retain the leading talent driving this progress. Therefore, progress in AI for the foreseeable future is critically reliant on massive supercomputing power.
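To see why a 10-month doubling time matters, it helps to compound it: a minimal sketch, using the doubling period cited above (the multi-year horizons are illustrative assumptions, not figures from this piece):

```python
# Back-of-envelope sketch: how a 10-month doubling time in frontier AI
# training compute compounds over multi-year horizons. The doubling period
# is the figure cited in the text; the horizons are illustrative assumptions.

DOUBLING_MONTHS = 10

def compute_multiplier(months: float) -> float:
    """Factor by which frontier training compute grows over `months`."""
    return 2 ** (months / DOUBLING_MONTHS)

for years in (1, 3, 5):
    print(f"{years} year(s): ~{compute_multiplier(years * 12):.0f}x")
```

At this rate, frontier compute requirements grow roughly 64-fold over five years, which is why a procurement sized for today's frontier is already obsolete by delivery.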
Single models at the frontier, namely OpenAI's GPT-4 and successors, are being trained on tens of thousands of the highest specification GPUs (AI training chips) for months on end, roughly equivalent to using what is called an 'exaflop' supercomputer continuously for months. Unfortunately, the UK public sector currently has fewer than 1,000 such top-spec GPUs, shared across all scientific fields. This means that one private lab in California is now using at least 25x the total compute capacity available through the entire UK state, just to train a single model. Our lack of compute undermines our ability to attract the best global talent in this technology; our businesses' ability to commercialize and deploy it; and perhaps most critically, our state's soft power over international use and control of it.
AI supercomputing is now foundational infrastructure that enables all of society and, as the recent Blair/Hague report recommends, we have to invest accordingly. The Independent Review of the Future of Compute provided recommendations toward this last week, but progress in AI has accelerated dramatically this year, and the review's urgency and scale are no longer adequate.
The Review's first key recommendation is to purchase one single exaflop supercomputer, roughly equivalent to 30,000 GPUs, for shared use by all UK research communities (not exclusive to AI) by 2026. This leaves the entire nation's compute capacity in 2026 behind one relatively small frontier US lab in 2022. We emphasise that whatever we procure will be heavily diluted across all research fields, whereas OpenAI used equivalent compute to train a single model - and our machine would arrive four years after OpenAI trained that model.
Leaders such as OpenAI will only continue to increase their compute usage. Another leading lab, Anthropic, has said that a state would require 100,000 top-spec GPUs within 3 years to be competitive in this space. This is a major upscaling of ambition merely to keep pace with these organisations that are still relatively small startups. Google, Meta, Microsoft and others are all using even more, and the US will likely start building an AI supercomputing resource of 75,000 top-spec GPUs soon. As competition grows, it is a necessity that we have a sovereign supercomputing resource to enable our objectives in this space. As suggested by experts from leading AGI labs, we need to procure 100,000 top-spec GPUs for sovereign supercomputing capability dedicated to AI, for delivery ASAP.
The fastest any supercomputer could physically be procured is likely late-2024, leaving a vulnerable window of almost 2 years amidst intense and growing competition with other companies and nations. The Review's second key recommendation is for the creation of a dedicated "UK AI Research Resource" by May 2023, using 3,000 GPUs on commercial cloud. An exascale supercomputer, as used by OpenAI, has the equivalent of 30,000 GPUs. Therefore this recommendation would immediately leave the British state with only 10% of the capacity of a single model from a single private American company today, a deeply uncompetitive position.
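The capacity gap described above follows from simple arithmetic on the figures already cited (30,000 GPUs per exaflop-class machine, a 3,000-GPU proposed resource, and fewer than 1,000 current public-sector GPUs):

```python
# Arithmetic behind the capacity comparisons above, using figures from the text.
EXAFLOP_GPUS = 30_000      # GPUs roughly equivalent to one exaflop supercomputer
UK_RESOURCE_GPUS = 3_000   # Review's proposed "UK AI Research Resource"
UK_PUBLIC_GPUS = 1_000     # upper bound on current UK public-sector top-spec GPUs

# Share of a single frontier training run's compute the proposed resource covers
share = UK_RESOURCE_GPUS / EXAFLOP_GPUS
print(f"Proposed resource vs one frontier model: {share:.0%}")  # 10%

# Multiple by which one private lab's single-model compute exceeds the entire
# current UK public-sector capacity (consistent with the 'at least 25x' above)
ratio = EXAFLOP_GPUS / UK_PUBLIC_GPUS
print(f"Single frontier model vs current UK public capacity: {ratio:.0f}x")  # 30x
```

In other words, even the proposed dedicated resource would cover only a tenth of what one American lab already spends on a single training run.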
We should aim to rent 30,000 GPUs as soon as possible, and also build out dedicated engineering resources for using these. While this is lower than needed to match what startups such as OpenAI have access to today, the scarcity of available GPUs to rent already makes this ambitious to attempt. This could be done on a 2-year contract using cloud, or in partnership with domestic firms that are already significant users of GPUs via cloud.
Our existing academic systems are unequipped to act with the urgency needed for what is now a national security issue. Similar to the Vaccine Taskforce, we have to stand up a new time-limited AI Supercomputing Taskforce, that should be led by UK frontier tech experts, and should have the remit to pursue procurement and oversee delivery of the systems recommended above.
These actions are particularly essential, as there is no possibility that subsequent goals can be achieved unless we have truly world-leading supercomputing ability.
3. The UK must build an elite public-sector research lab at the frontier of safe AI research.
This technology is so fundamentally important that it cannot be allowed to develop solely in the private sector or abroad. As highlighted above, the current academic system in the UK is not capable of competing with frontier industry labs on this, and existing institutions are also not set up to deal with cutting edge technology development at speed.
To ensure the UK and its allies have a seat at the table in this, a national effort should be launched. Colocation of researchers is crucial, as demonstrated by the frontier labs, all of which organise their researchers in shared physical buildings. The UK needs to create a national lab to do this.
This enterprise must:
Be led by a deep technical expert, akin to a leader of a frontier industry lab, empowered with the freedom to lead an effective organisation. If this requires legislation, legislate. A business as usual public lab will fail for the same reasons the others fail, with major challenges as highlighted by the recent Nurse review and the Blair-Hague report.
Be empowered with similar freedom of action as ARIA, and ability to recruit top technical talent at competitive salaries. There is a sizeable pool of researchers who believe AGI should be developed in a sovereign way aligned with liberal democratic values. However, AGI talent is not cheap.
Be tasked with keeping the UK at the cutting edge of AGI-relevant technology, and provide a trusted source of advice on this to the British State and allied nations.
Ensure that the control of AGI, through research into safety, robustness, and reliability, is a key focus of its scientific mandate.
Be sufficiently resourced to compete at the very cutting edge, whilst having the freedom to partner with commercial actors.
Form the public-sector nucleus of a whole-of-nation effort towards AGI, including in partnership with other leading corporate actors who must share access to their resources and frontier AI models and can also contribute their expertise to the effort.
This cannot be run through existing channels, which are too dysfunctional.
Importantly, as we do not know the path to safe AGI now, this effort should not be narrowly focussed on Large Language Models, which would represent over-optimisation for what is most significant today. Rather it must be flexible and look to develop next generation capability which may emerge and surprise us.
4. The UK must initiate and lead a multilateral, liberal democratic effort to control AGI.
While this is clearly a technology we must own, under the Integrated Review's own, collaborate, access framework, it is also critical that we work on this technology in partnership with our allies. We must engage with international allies (US, CA, AU, NZ, EU, JP) who are aligned with our values but currently lack the technical and operational ability to work towards safe AGI, and offer to let them join our initiative as partners. Focus on having them provide funding for supercomputing and salaries, as well as on data collection, while ensuring Lab Directors otherwise have operational freedom in hiring and project direction. It is important that this multilateral angle be pursued as soon as possible, to avoid a zero sum technology race, and to make a multilateral partnership more palatable to the US.
5. The UK must lead in proactive and anticipatory governance of AGI
There will clearly be a need for significant regulation of advanced AI leading to AGI. We cannot afford to think through these issues as we encounter the effects of such developments. The state must spend significant resources to prepare a variety of regulatory responses in this space, in partnership with actual technical experts in this area. This should include topics such as monitoring and measurement of domestic and foreign supercomputing resources needed for AGI, a rethinking of cybersecurity measures for frontier research labs, and verification regimes to ensure that all actors, whether allied or not, are adhering to norms around development of this technology.