We were lucky enough to recently meet some exceptional people during a recruitment round for ForgeFront. 

One of the questions Gavin and I asked candidates is what their vision would be for the company in 5 years. What would the ForgeFront of 2030 look like?  

Acquisition strategy, growth targets and maximising USPs were common themes.  

But something just as fundamental also came to light: in the age of AI, what does the role of a policy and futures consultancy look like? 

Much has been written about the current impact of LLMs on consultancies, as well as the potential implications as AI develops further.

The Economist asks of McKinsey: “what happens when AI models also start producing the kinds of alliterative three-part frameworks … senior partners so proudly present?”

In the public sector, big productivity gains are expected from AI agents. A recent trial showed that UK civil servants could save almost two weeks per year through effective use of such tools.  

Of course, if you’re at the coalface of delivery – working in the emergency services or the NHS, for example – AI technologies are invaluable for summarising meetings, drafting communications and filtering content.  

They will have powerful applications in the future too, from diagnosing more illnesses to proposing management priorities during complex disasters. 

But ForgeFront operates in a space where we produce and deliver policy advice, often utilising futures and foresight. 

Here we believe the situation is different. Governments receive their mandate from the electorate at the ballot box, and in turn ministers task civil servants and policy consultancies like ours to deliver this mandate. 

This creates a line of accountability: we are accountable to our government clients in the same way that the ministers we work with are accountable to the people.  

The new Hillsborough Law is one example of how important this is.  

It is difficult to envisage a future where AI can insert itself into this accountability chain, or a world in which AI would be entrusted to deliver a public mandate. 

What’s more, at the time of writing, LLMs are not up to making the sort of nuanced and complex decisions required in the policy sphere. These wicked problems often hinge on human relationships or political subtleties.  

Training data cannot adequately capture these issues, and LLMs’ current lack of continual learning presents another obstacle. 

Policy advisers can certainly utilise AI to help with the grunt work, but a lack of accountability and mandate means we are unlikely ever to fully entrust AI with policy delivery. 

Ultimately, consultancies like ours do not face some of the issues that might be looming for colleagues working exclusively in the private sector. 
