AI is already shaping how charities work — often quietly, informally and without clear oversight. To help charities understand what’s really happening, Charity Excellence carried out the Future Charity AI Survey (March 2026) and brought together evidence from sector benchmarking, public attitude research and regulator guidance.
This web page summarises the key learning from the report and what it means in practice for charities today. The full report is available to download below. It will become the baseline for Part 2, which will assess what an effective charity sector would look like in an AI‑enabled world and what we need to do to get there.
We are very grateful to the GSR Foundation, whose funding makes our work possible, and to the others whose work informed our own (listed at the bottom of the page).

“AI is being used informally, without anyone really talking about it.”
Most UK charities are already using AI in some form, typically through individual staff or volunteers using tools like Microsoft Copilot. However, this use is rarely viewed in the strategic context of the huge impact AI will have on society. Only a very small minority of charities are deploying AI with clear organisational oversight, agreed policies or trustee‑level ownership.
This creates a growing gap between what is happening in practice and what boards believe is happening, even though trustees remain legally responsible for AI‑related risks such as data protection, safeguarding and bias.
Learning: AI use has continued to accelerate but governance has not kept pace.
“Our organisation understands the importance of keeping up with AI, otherwise we risk being left behind.”
Around two thirds of charities describe themselves as exploring or experimenting with AI. Fewer than one in four have approved tools, policies or training in place, and fully embedded use remains uncommon.
Despite this early stage, pressure to engage is strong. Many charities feel they cannot afford to ignore AI, even if they are unsure how to proceed safely.
Learning: Momentum is building faster than confidence.
“It feels like standing at the base of a mountain and not knowing which path to take.”
Just over half of charities strongly agree that AI could benefit their organisation. At the same time, concern is widespread. Data protection is the single biggest worry, followed by safeguarding, ethics and reputational risk.
A notable minority of charities still believe AI is not relevant to their work at all, including some grant makers, highlighting how uneven understanding remains across the sector.
Learning: Interest is high, but fear and uncertainty are holding many charities back.
“We haven’t formally discussed AI at a Trustee meeting yet.”
Charity Excellence system benchmarking data shows that practical risk controls — such as data protection measures, human review of AI‑generated funding bids, and safeguarding in AI‑enabled meetings — are improving.
However, all three board‑level AI governance controls remain rated Red across the sector: strategic assessment of AI’s impact, clear trustee or committee responsibility for AI, and organisation‑wide training and compliance.
Learning: Charities are managing immediate risks, but struggling to embed AI into governance and management.
Public attitudes towards charity sector AI are not hostile, but they are cautious. Around a third of people feel positive, a quarter feel negative, and the rest are unsure. Support is strongest where AI is used to protect funds, detect fraud or improve back‑office efficiency.
Trust drops sharply when AI is perceived to influence decisions about who receives support, replace human judgement, or use sensitive personal data without transparency.
Learning: How charities use AI matters more than whether they use it. Trust also varies depending on the audience, cause and the type of organisation.
Both charities and the public express unease about AI‑generated imagery. Many charities have tried it and then stopped, citing ethical, reputational or authenticity concerns.
Research shows strong public support for authentic imagery, with lower acceptance where AI images appear realistic or emotionally manipulative, especially in sensitive contexts.
Learning: AI imagery needs careful, values‑led decision‑making and transparency.
“It’s not really resistance. It’s that we don’t know enough yet.”
Where charities hesitate, it is rarely due to blanket opposition to AI. The main barriers identified are limited knowledge and confidence, uncertainty about how to proceed safely, and a lack of training, guidance, policies and capacity.
Charities are clear about what they need next: plain‑English training, practical guidance, ready‑to‑use policies and funding to build capacity.
Learning: Confidence will come from support, not pressure.
Resources. Charity Excellence provides the following free AI support.
“People are already using it day to day, but it’s not really acknowledged and there are no guardrails.”
The evidence shows that Charity Sector AI is already part of day‑to‑day reality but the sector is at risk of moving forward without shared standards, confidence or trust.
Charities that succeed will be those that:
📄 AI Future Charity Report – Part 1: Where the Charity Sector Is Now
A detailed, evidence‑based analysis of Charity Sector AI use, attitudes, risks and support needs in 2026.
👉 AI Future Charity Report Part 1 April 2026
A registered charity ourselves, the CEF works for any non-profit, not just charities.
Plus, 60+ policies, 8 online health checks, the Quality Mark and the huge resource base. Our AI Ready programme and the free Charity Excellence Learning online AI training courses give non-profits everything they need to make effective use of AI and stay safe.
Find Funding, Free Help & Resources - Everything Is Free.
Charity Excellence data.
We are also very happy to recognise the work of others that was used in creating the report.
The methodology used is detailed at the bottom of the downloadable full Charity AI Survey 2026 report.
The Charity Excellence AI Survey Agent was used in gathering and analysing data, but under the direction and control of a human.
AI is already widely used by UK charities, most often through individual staff or volunteers using tools such as generative AI. However, this use is rarely strategic and is often informal, with limited organisational oversight or trustee involvement.
The main concern is the growing gap between day‑to‑day AI use and governance. Trustees remain legally responsible for risks such as data protection, safeguarding and bias, yet many boards are unaware of how AI is already being used within their organisations.
Most charities are still at an early stage. Around two thirds describe themselves as exploring or experimenting with AI, while fewer than one in four have approved tools, policies or training in place. Fully embedded use remains uncommon.
Most charities see AI as both an opportunity and a risk. Just over half strongly agree AI could benefit their organisation, but concern is widespread. Data protection is the single biggest worry, followed by safeguarding, ethics and reputational risk.
Key governance controls remain weak across the sector: strategic assessment of AI’s impact, clear trustee or committee responsibility for AI, and organisation‑wide training and compliance. This makes it harder to embed AI safely into management and decision‑making.
Public attitudes to charity sector AI are cautious rather than hostile. Around a third of people feel positive, a quarter feel negative, and the remainder are unsure. Trust depends heavily on how AI is used and for what purpose.
Trust is strongest when AI is used to protect funds, detect fraud or improve back‑office efficiency. It drops sharply when AI is seen to influence decisions about who receives support, replace human judgement, or use sensitive personal data without transparency.
Both charities and the public express unease about AI‑generated imagery. Many charities have tried it and then stopped due to ethical, reputational or authenticity concerns. Public acceptance is lower where images appear emotionally manipulative or are used in sensitive contexts.
Charities consistently ask for plain‑English training, practical guidance, ready‑to‑use policies and templates, and funding or time to build capacity. Confidence is more likely to come from support and clarity than pressure to move faster.