Over the past few months I’ve been spending quite a bit of time collaborating on how the state should be reformed to address science and tech, drawing on my time in Number Ten Downing Street and the science department.
One thing I worked on was a joint paper between Tony Blair and William Hague. My earlier blog on it is here:
The paper laid out a series of major reforms. For international readers: Hague was leader of the opposition when Blair was prime minister, so it’s somewhat like McCain and Obama coming together to call for a major science/tech agenda.
Today we have launched a successor to that paper, again with Blair and Hague, focussing specifically on AI. It is covered on the front page of The Times, which reads:
AI revolution risks leaving Britain behind, warn Blair and Hague
“Britain failed to foresee the AI revolution and ministers need to sack their advisers or the country risks becoming “irrelevant” in the field, Sir Tony Blair and Lord Hague of Richmond have warned.
In a highly critical report, they say the government “failed to anticipate the trajectory of progress”, adding: “If the country does not up its game quickly, there is a risk of never catching up.”
Blair and Hague, who once faced each other across the dispatch box, come together to recommend reshaping the state and ramping up AI funding “on a scale of HS2”. Only then can Britain use its expertise and global position to lead on development and regulation of the technology, they argue.
The pair say that the government’s advisers on the AI Council and at the Alan Turing Institute should be replaced, adding: “The Alan Turing Institute has demonstrably not kept the UK at the cutting edge of international AI developments.”
They recommend giving extra powers to the new AI Taskforce — modelled on the vaccine unit — being set up by the government and having it report directly to the prime minister. “It must be shielded from typical Whitehall processes,” adds the report, A New National Purpose: AI Promises a World-Leading Future of Britain.”
“AI’s unpredictable development, the rate of change and its ever-increasing power mean its arrival could present the most substantial policy challenge ever faced, for which the state’s existing approaches and channels are poorly configured,” they conclude.
At the heart of their proposals is a new nationally funded AI lab, which they name Sentinel, akin to the European Organisation for Nuclear Research (Cern)…”
Full article here (paywalled)
I recently had an op-ed in the Sun on Sunday explaining for a general audience why this is so crucial to get right [link].
Below is a summary tweet thread I put out on this new report, which you can read here. If you agree, please retweet the original tweet!
TWEET THREAD, images from report.
The UK needs a new national purpose, centered on harnessing science and technology. Nowhere is this more relevant than for AI. In this new bipartisan report with Tony Blair and William Hague we suggest how this should be done with much increased scale, ambition, and speed.
This report was a collaboration with lead authors @benedictcooney, @LukeWStanley, @Tom_Westgarth15, and myself. Other contributors include Pete Furlong, Melanie Garson, Kirsty Innes, Alexander Iosad, Oliver Large, John-Clark Levin, and Kevin Zandermann.
We previously laid out how the UK needs a new national purpose of harnessing science and technology for public benefit, including major reforms to the state, much increased investment, reform and reinvention of scientific institutions, and new approaches to procuring and deploying tech.
In this new report, we show how this agenda and framing can be applied to Artificial Intelligence. We highlighted how vital AI would be in our first report. But the messages and policies of our first report have only become more urgent and important since then.
As Tony and William say in their introductory text, this agenda and AI specifically is ‘a matter becoming so urgent and important that how we respond is more likely than anything else to determine Britain’s future.’
We first highlight how the rate of change, AI’s unpredictable trajectory, the expertise required, and its likely power and transformative potential pose unprecedented challenges to governments and typical government processes built for a different era. Governments are not configured for this.
We highlight how ‘If harnessing science and technology should be the UK’s new national purpose, then creating a path to prosperous, free and safe AI must be the highest priority within that agenda.’ However, we warn the UK’s window of opportunity is rapidly narrowing.
This is a historic opportunity for leadership, and we need a plan.
We address the solution in three stages. First, we highlight the reforms to the state that are needed. Second, we argue that the UK must lead on interpretable and safe algorithms. Third, we outline how to better deploy this in public services. Here is a subset of our recommendations.
To reform the state, we first need to recognise that the UK must make difficult decisions reprioritising its capital expenditure “to find additional resources on the scale of HS2 or Labour’s Green Transformation Fund to be a serious global player”.
We previously argued that Number Ten needs a science and technology unit. Events since then have only served to confirm how central this agenda will be for the UK, and how vital it is that the PM’s core and trusted advisory pool in Number Ten includes technical experts.
We highlight that the government’s existing advisory channels and systems for AI have failed. Whilst industry figures have warned in private of the rapid pace of change, the government has been blindsided. Existing institutions have not oriented to this agenda effectively.
Some of us inside Number Ten 2020-2022 tried to warn that this moment was coming and the UK should get ahead of the curve. We even had a dedicated AI advisor. But we were told our views were ‘outliers’ and the AI strategy was essentially unfunded at the 2021 spending review.
We highlight the need to turn to a new generation of researchers for advice, given the major paradigm changes that have occurred, and to abolish the AI Council, which was built for a different era.
Government works on the basis of long-term plans and point predictions about progress, treating R&D as it does building a hospital or a train line. This will not work in the uncertain, rapidly changing environment of AI. Government must embrace foresight and flexibility.
The Prime Minister needs to lead an effort to reorient the state toward this, including with reforms to the Treasury.
John-Clark Levin was especially critical to this section of the report, which, from my time in Number Ten, I think is an especially crucial thing to get right. The state as presently configured cannot deal with the situation it faces. There is much more in the report.
The Prime Minister has rightly created an AI Taskforce, otherwise known as the Foundation Model Taskforce. But to actually learn from the Vaccines Taskforce, the Prime Minister will need to empower it. Otherwise it will be ‘VTF-in-name-only’, a business-as-usual endeavour.
I was there at the setup of the VTF, and saw how difficult it was to secure the freedoms it needed.
There are many strengths to the civil service, but it has faced challenges in dealing with the technology revolution and COVID-19. VTF lead Kate Bingham has spoken strongly of the limitations of existing science and technical skills inside the Civil Service.
The Cabinet Secretary Simon Case has agreed with this.
The Prime Minister will therefore need to have the AI Taskforce reporting directly to him, bypassing vetocracy, unblocking problems, and allowing it to hire a technically exceptional team.
The AI Taskforce’s total budget is currently a tenth of DeepMind’s annual budget. It must be elevated, and it must have the freedom to work at pace. Protecting it from Treasury micromanagement is especially critical. This is key for it to be transformative.
In our second section we argue the UK should seek to be known as a pioneer of safe and interpretable forms of AI. This section, which builds on what we said in our original report, has aged well since it was first drafted.
The current trajectory of fully private-sector development is unsustainable, for reasons we explore. Instead, we need new models of public-private interaction on cutting-edge AI systems.
We need to sharply improve the structures supporting AI research in the UK. We have a range of policies from new ‘polymath fellowships’ allowing technical experts in other fields to become deeply AI literate, and vice versa. We also need to expand other training programs.
@ARIA_research, the new ARPA-like research organisation, is ideally suited to rapidly orient to the AI revolution and bring its benefits across disciplines, in the interdisciplinary way so key for AI. But to be a major player it needs a much bigger budget: at least £2bn annually by the end of the next parliament.
As previously argued, the Turing Institute’s AI function has not kept us at the cutting edge. Its core AI focus should be wound down, with the institute refocused on areas such as digital twins, whilst a new effort is launched.
See for example @martingoodson’s piece, and my co-authored argument from March, on why the Turing’s AI function should be wound down and a new effort launched. The recommendation was not made lightly, but is based on years of conversations with frontier tech experts and researchers.
https://rssdsaisection.substack.com/p/the-alan-turing-institute-has-failed
https://jameswphillips.substack.com/p/securing-liberal-democratic-control
It pains me to criticise a well-intentioned effort, but a key issue is that when the government’s home of AI is not relevant to cutting-edge AI, it causes big problems inside the system, because it is treated as the ‘trusted’ source of advice. In my experience this view is widely held in private, including at senior levels.
But the UK must not abandon the notion of a public sector AI lab. A world with superintelligent systems solely understood and controlled by private actors is not viable. Rather, the UK needs a lab oriented and set up to deal with the future.
Create AI Sentinel: We argue in the paper for the UK to seed an international laboratory network, which we call AI Sentinel. It should be focussed on three areas to complement and collaborate with the private sector.
First, to develop and deploy methods to interrogate and interpret advanced AI systems for safety, while devising regulatory approaches in tandem. It both researches and deploys safety tech, acting as a ‘test bed’ for AI.
Secondly, to ensure sovereign states and their regulators can understand the latest AI systems. Notably, it is not aiming to push capability development beyond the frontier absent safety improvements, which would accelerate an unsafe race.
It is also not intended to compete directly with private companies. For example, we do not suggest creating a ‘BritGPT’ in the paper, but rather lay out a recipe for an ecosystem that creates a safer next generation of technology, rather than playing catch-up.
Thirdly, Sentinel should act to promote a plurality of research endeavours and approaches to AI, especially in new algorithms that are more interpretable and controllable. Current incentives in the private sector favour pushing capabilities of black box algorithms faster, not improving legibility.
We provide greater detail on the motivations and requirements for AI Sentinel in the paper, including it being led by a frontier tech expert, and being open to international partners from the start.
A key point we make is that regulation is inseparable from research at present, and any intelligent regulation effort will need to be unusually closely joined with research. This is a key motivation for creating AI Sentinel.
We also call for government, through AI Sentinel, to bring some of the world’s brightest minds across fields to work in AI safety, inspired by how figures like John von Neumann were brought into government during the Second World War. Currently there is a misallocation of talent between capability and safety work.
On regulation, we emphasise that this must be closely coupled to AI Sentinel’s and others’ research. We call for divergence from EU rules and standards, but close engagement. We outline a range of measures to achieve this. There is much more in the report.
Section 3 shifts gear. The above sections all focus on laying a platform from which the UK can be a leader in deploying safe, interpretable AI to transform public services and the economy. Achieving this is what section 3 of the report covers.
We need to create new kinds of interdisciplinary research labs working at the intersection of science and engineering. We call these ‘Lovelace disruptive-innovation laboratories’.
These could work at the intersection of AI and other chosen fields, applying AI’s benefits to those fields to address major challenges.
@worrydream’s work on creating physical computing is an example of what such a laboratory could do. He turns entire rooms into communal computers where people can co-create and learn in the physical world without the need for screens or virtual-reality glasses.
https://www.phenomenalworld.org/analysis/the-next-big-thing-is-a-room/
Much more detail in our papers and @Rob__Miller and @DrEoinOSullivan ‘s work, and in this tweet of mine:https://twitter.com/AnEmergentI/status/1587420330750087169?s=20
We then outline how the government should approach integrating AI into public services, beginning with known technology to build expertise, then moving to more speculative approaches.
We need to upskill departments too, with Chief AI officers able to help departments understand AI, and use their technical skills to interact with the Office for AI and the AI Taskforce.
We also discuss how to address safety in the context of public services, such as having AI Safety incident registers.
To realise AI’s benefits, the UK needs markedly increased compute power. The recent future of compute review was the first time a country has taken a comprehensive view of its compute.
But even if fully implemented by its 2026 delivery date, it leaves the entire UK state far behind the 2022 frontier of even relatively small US labs like Anthropic and OpenAI. We address how such compute should be governed to promote responsible use.
The UK’s semiconductor strategy was devised in an era of lower investment and before AI became front-page news. The UK urgently needs to reassess its long-term semiconductor strategy, with a rapid review led by a technical expert. We have promising companies, but a lack of support.
We outline a range of recommendations on making the country ‘AI legible’ so AI can improve public services, whilst protecting data privacy. This includes funding dataset creation, and major national programmes to implement AI in AI-legible sectors such as grid optimisation.
To spur the private sector and utilise its strengths, we need much improved procurement. AI is too illegible, fast paced, and technical for current procurement approaches. We repeat our earlier call for a ‘DARPA of procurement’, acting as a buyer and tester of first resort for AI products.
We also urge organisations like UKRI to prioritise physical AI through robotics research, with challenge funding in this area. We outline how we should begin to approach deepfakes and labour market disruption, addressing the democratic deficit, and much more.
This thread is far from comprehensive despite its length. Please read the report, share it, and ask your MP to take it seriously and lobby for it to become government policy.
I wrote a piece recently in the Sun on Sunday explaining why getting AI right is so critical to the future of our country and the world.
https://twitter.com/AnEmergentI/status/1665318794397462528?s=20