24 Apr 2025
by James Arrowsmith, Browne Jacobson
Analogue to digital

The shape of future adult and children's services is an emerging picture, but the UK Government's commitment to data-driven services, automation and AI is clear, both from the AI Opportunities Action Plan and the 3 January 2025 announcement on social care.

With staff and capacity challenges, tight budgets and efficiency programmes, authorities responsible for care need technology. AI in the care sector is accelerating, with ambitions to deliver projects at pace, using test and learn approaches, or rapid scaling of pilots.

Data foundations

Data fuels AI. Poor data drives poor outcomes, and AI may amplify data issues such as bias. The Care Data Matters roadmap of December 2023 set out to tackle this, but in February 2025, the Care Quality Commission identified that ‘data is used inconsistently’ in adult services. Analysis of assessment reports to date shows inconsistency in data on performance, equality, advocacy and outcomes, as well as in data sharing. In Wales, Alma Economics has found data quality issues, including incomplete records and a lack of automated checking.

Assessing data maturity at the outset of an AI project is crucial for success. This includes testing existing data quality and confidence in future data. Proposals to ‘fix’ data or significantly change data capture practices are difficult and slow to deliver and should be viewed with caution. 

Off-the-shelf, closed-source solutions require close attention. The decision-making model will not be transparent, and the training data may not be either. This creates uncertainty over whether the data used to train the system sufficiently reflects the demographics of local citizens or service users.

For example, where social workers are asked to make significant decisions based on a model they do not understand, questions of ethics arise. Such models also reduce the ability to deliver transparency or explain decisions to service users, families, or in a court setting if decisions are challenged.

‘Everythingism’

The Reform thinktank describes ‘everythingism’ as ‘the belief that every proposal, project or policy is a means for promoting every… objective, all at the same time’ and as ‘the pathology holding back the state’.

Everythingism refers to the (often false) hope that one solution can solve all the problems an organisation is facing. It is endemic in AI projects and a very significant risk.

To succeed, AI projects must have clearly defined and focused deliverables, based on service delivery priorities. Opportunities to add value or achieve additional benefits, while well-intentioned, should be recognised as potential distractions which could derail projects. It is better to stay on mission for a focused first deployment, capturing other opportunities for later review and possible development in future phases.

Soft risks: politics, trust and engagement

Data collection and use is contentious. Citizen trust in public bodies is low, and the trust deficit is often greater in marginalised communities, where low trust already creates barriers to social care engagement.

The AI Opportunities Action Plan seeks to support safe and trusted AI development and adoption through regulation, safety and assurance. This is crucial, but it does not displace the need for each organisation to consider trust and engagement in every AI project.

Projects can be derailed by data discomfort among the public, which can flare into a political issue. For projects which weather this, product performance will be significantly reduced if data discomfort leads to reduced engagement, such as refusal to provide data or requests for data deletion.

Effective strategies to engage both those directly affected by an AI project and the wider body of citizens are crucial to the successful use of technology in social care.

The sections above emphasise the need to start with:

  • An understanding of current data maturity
  • Clear, focused project deliverables.

These lead to a key question: what technology does the project need?

GenAI is not necessarily needed and may be unsuitable for many data-driven projects in social care. Analytical AI or rules-based automation may be more appropriate. Alternatively, an AI tool may have a defined role within a project of wider scope.

AI and automation are simply tools (though powerful ones) that can be deployed in service improvement projects. Success and risk will depend on ensuring they are only used where they are the right tool for the job.

Risk management in AI projects

Pressure on social care systems, the promises of AI advocates, competing priorities, and varying levels of data and technology maturity mean data and AI projects are noisy. From ‘AI optimists’ to ‘data defeatists’, every stakeholder is likely to have different insights and views on opportunity and risk. This in itself can impact project delivery.

Think through a carefully designed risk checklist, including:

  • Deliverables
  • Data maturity
  • Product knowledge & choice
  • Data management
  • Stakeholder management.

Risk managers can take a critical role in objectively assessing ‘do nothing’, project and product risks, bringing clarity to the risk landscape around any AI deployment and supporting informed decisions and successful outcomes.