Charity AI Risk Assessment Template and Risk Register

A free, simple-to-use charity AI risk assessment template and risk register to help you identify and manage charity AI risk


This is the charity AI risk assessment template and risk register that informed our work in creating the Charity AI Governance & Ethics Framework. In making the risk assessment, I split AI risks into near, medium and long term, and also into risks to everyone and risks specific to charities.

It can be used by any charity (or anybody else) to quickly gain an overview of AI risk. You can also copy the AI risk register, or take from it the AI risks relevant to you and include them in your own charity risk assessments and register.

The second section enables you to assess whether AI is a risk or an opportunity for your own charity. At the end, I've included other AI risk management resources and our charity AI FAQs.

The AI Governance & Ethics Framework can be used as a simple AI risk mitigation and avoidance tool. For those designing or commissioning AI systems, you may also want to look at the AI Design Principles guide and our Charity AI Data Protection Toolkit.


This risk register template details the main AI risks in likely chronological order, split into AI risks for all of us and, if applicable, specific charity AI risks.
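As an illustration, the register's entries can be captured in a simple spreadsheet or script and scored so that the highest-rated risks are reviewed first. This is a minimal, hypothetical sketch only: the field names and the 1-to-5 likelihood-and-impact scale are our assumptions for illustration, not part of the template itself.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a simple AI risk register (illustrative structure)."""
    name: str
    timeframe: str   # e.g. "current", "2024 onwards", "2025-2050"
    scope: str       # "everyone" or "charity-specific"
    likelihood: int  # 1 (rare) to 5 (almost certain) - assumed scale
    impact: int      # 1 (negligible) to 5 (severe) - assumed scale
    mitigation: str

    @property
    def score(self) -> int:
        # Simple likelihood x impact rating, as used in many charity risk registers.
        return self.likelihood * self.impact

# Two example rows drawn from the risks described in this register.
register = [
    RiskEntry("Privacy: sensitive data input into LLMs", "current",
              "charity-specific", 4, 4, "Staff AI policy and training"),
    RiskEntry("AI-enabled scams targeting donors", "current",
              "everyone", 3, 5, "Cyber security awareness"),
]

# Review the highest-scoring risks first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:2d}  {entry.name}")
```

The same structure works just as well as columns in a spreadsheet; the point is to record likelihood, impact and mitigation against each named risk, then revisit the scores regularly.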

Charity AI Risk Register - Current

AI Risks for Everyone and Specific Charity AI Risks
1. Privacy: Large language models (LLMs) are populated by scraping data from the Internet, and the default in many AI systems is to share your data with the LLM data set. Online content created by charities may be shared and used by others without regard to copyright and IP, and there is a risk of charity people unwittingly, and without appropriate consent, inputting sensitive personal and financial data into LLM data sets.
2. Legal Issues: Data is being input into LLMs without the permission of the content creators, disregarding copyright and other IP, and can then be used by anyone. In many cases, charities may be happy for their content to be used more widely, but there may be issues around imagery, particularly of children and women.
3. Disinformation: AI will enable disinformation to be made far more convincing. There are guardrails, but also an increasing number of jailbreak websites that enable these to be circumvented. Charities campaign to counter abuse and discrimination, but AI will enable those creating this material to do so in a far more convincing and compelling way. There is also a risk of charities using AI-generated imagery without clearly annotating it as such and, in doing so, undermining public trust.
4. Scams: AI is already enabling far more convincing scams, and there is a growing risk to charities and their beneficiaries. More widely, as people become less confident in their ability to tell a scam from a genuine fundraising campaign, scams will undermine the public's trust, making people less confident about donating.
5. Discrimination: AI does not itself discriminate, but the data sets used and how these are trained may well do so. Ensuring this doesn't happen may slow adoption and increase cost, so there's a real risk of it not being done correctly. There are risks of discrimination to charity beneficiaries, and charities need also to ensure that AI systems they adopt or create are not themselves discriminatory.

Charity AI Risk Register - 2024 Onwards

AI Risks for Everyone and Specific Charity AI Risks
1. Obsolescence: Organisations that fail to respond, clinging to services and/or business models that people no longer need, or that they can access for free, more effectively or more easily using AI. There are going to be losers. The risk may be greater for charities that are larger and national, because they are less likely to have very niche or place-based activities that are too difficult or small to make building AI for them worthwhile. Our risk checklist is at the bottom of our Impact of AI on Charities Insight Report.
2. Loss of Human Agency: We could design jobs to retain human agency and achieve more, or cut jobs and dumb down work to cut costs. If we just cut costs, there is a risk of job losses, and also of a mental health impact on those who remain in work. AI offers charities a huge opportunity to achieve more, and our people are our single biggest asset. Dumbing down charity jobs may save some money, but there would be a much greater loss in terms of effectiveness and impact on our people.
3. Moral Outsourcing: A phrase coined by Rumman Chowdhury: blaming the machine by applying the logic of sentience and choice to AI, thereby allowing those creating AI to reallocate responsibility for the products they build onto the products themselves, rather than taking responsibility. The issues with AI arise because of the data sets we choose, how we train them and how we use the AI itself. We need to own the problem.
4. Dumbing Down Creativity: AI is already being used to create art and simpler news content. If this becomes widespread, it may well impact human artists, and the quality of news and other content may be dumbed down into AI 'supermarket muzak'. It may be art, but the creative magic has gone.

Charity AI Risk Register - 2025 - 2050

AI Risks for Everyone and Specific Charity AI Risks
1. Digital Moats: In September 2023, the UK Competition & Markets Authority highlighted the risk of the AI market falling into the hands of a small number of companies, with a potential short-term consequence that consumers are exposed to significant levels of false information, AI-enabled fraud and fake reviews. In the long term, it could enable firms to gain or entrench positions of market power, and result in companies charging high prices for using the technology. 'Economic moat' is a term coined by Warren Buffett to describe the potential for companies to become so dominant that they exclude all competition. The charity-specific risk is that the small number of very large charities, which already receive the vast majority of sector fundraising income, will use their significant digital expertise to become even more dominant, to the detriment of smaller charities. Our AI Steppingstones Strategy was, in part, created in response to this risk.
2. Digital Super Exclusion: A Charity Excellence concept, so just our work. AI has the potential to hugely improve accessibility. However, there will always be those who cannot or will not use digital, often the most vulnerable. It seems likely to us that organisations may either switch off legacy systems as too expensive to maintain for the remaining very small numbers, or assume they've solved the problem and switch them off as no longer needed. The result would be far fewer digitally excluded people, but those who remain excluded would become 'super excluded'.
3. The Paperclip Maximiser: A 2003 thought experiment in which a computer wipes out humanity because we were getting in the way of its primary aim of maximising paperclips. This is getting a lot of media attention, but we don't think it's a major risk until we achieve Artificial General Intelligence (AGI): human-like cognitive abilities. Nobody knows, but we think AGI is probably still years away and may only be partly achievable.
4. Loss of Critical Thinking by Humans: In the same way that many lost their mental arithmetic skills with the advent of calculators and Excel, there is a risk that we may slowly lose our critical thinking abilities if we outsource them to AI. This may sound daft, but it has been flagged as a potentially serious and insidious risk by some leading thinkers, because critical thinking is critical (funnily enough) to just about everything we do, from simple day-to-day decisions to 'should I press the button and start a nuclear war?'

AI Risk Assessment - Frontier AI

Frontier AI refers to highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities of today's most advanced models. In October 2023, the Government Office for Science published Future Risks of Frontier AI, covering capabilities, other uncertainties and scenarios.

AI Risk Assessment - Is AI a Risk or Opportunity for my Charity?

To find out whether your charity is ready, or at risk, use our charity AI roadkill toolkit, which gives you the questions you need to ask yourself.

Charity AI Use - Timescale

Charity AI is not something that will only pose a risk in the future; it's already here, and its use is likely to grow rapidly. Here are two examples of charity AI use.

  • Our main information hub is hugely popular and the largest in the sector (because we promote the whole sector, not just our own resources). It took me 5 years to build and, in just 4 months, the AI bunnies pretty much made it redundant.
  • They can now answer 20,000 non-profit questions, with a 90%+ success rate, 24/7. They are currently answering 3,000 queries a month, but can manage 120 a minute and cost almost nothing. If you chat to it, the in-system bunny can also write funding bids and create and run ChatGPT prompts for you.

The more people chat to them, the better they get and that's without taking into account the rapid advances in AI capabilities.

Charity AI Risk Assessment - Main Types

We believe that charities need to think about AI risk in three ways. Which of these applies to any given charity, and how much each matters, will vary, but we think all charities must take steps to safeguard themselves from AI risks.

  • Internal AI Risks - Poor AI Adoption.  Either failing to adopt AI, or not using AI systems correctly and safely, resulting in:
    • Not achieving the charitable impact the charity could and should have; and/or
    • Creating a negative impact on beneficiaries and/or non-compliance with the law.
  • External AI Risks - Bad Actors.  The direct risk to charities and their beneficiaries.
    • For example, being targeted for fraud or misinformation.
  • Societal AI Risks - Poor Law & Regulation.  The risk that effective AI regulation is not created globally and nationally, leading to either deliberate unfair exploitation or unintended harm.
    • For example, this particularly matters for campaigning charities in fields such as human rights.

What are the Key Charity AI Risk Management Steps?

As with any other charity risk, the key risks and steps to take for AI will depend on your charity and your priorities. However, here are four key steps.

  • Human Oversight and Control - AI systems shouldn't operate fully autonomously. Implement robust human oversight mechanisms to monitor and intervene when necessary: human in the loop (HITL), which isn't new or unique to AI.
  • Awareness and Training - as with everything else, people can be either your weakest point or your best defence. Make sure staff are aware of the risks around AI and what they need to do, that your policies reflect AI (all downloadable Charity Excellence policies have AI included where necessary) and that they've been given any necessary training and/or guidance.
    • Scams and Misinformation - these are likely to increase very significantly and become far more sophisticated. As part of the above, ensure your people and beneficiaries know what they need to know to protect themselves. Use our charity AI Cyber Security toolkit.
  • Transparency and Explainability - if you are building, commissioning or funding an AI system, ensure those using it understand how it works and the rationale behind its decisions. This helps identify potential biases and vulnerabilities. Use our AI Design Principles toolkit.
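The human-in-the-loop idea above can be sketched very simply: the AI produces a draft, and nothing is sent or acted upon until a named person explicitly approves it. The function names here are illustrative assumptions for this sketch, not a real Charity Excellence or AI vendor API.

```python
from typing import Optional

def ai_draft_reply(query: str) -> str:
    # Stand-in for a call to your AI system (hypothetical).
    return f"Draft answer to: {query}"

def human_review(draft: str, approve: bool) -> Optional[str]:
    """Nothing leaves the charity without an explicit human decision.

    Returns the draft only if a person has approved it; otherwise None,
    so an unreviewed draft can never be sent by accident.
    """
    return draft if approve else None

# A person reads the draft before anything goes to a beneficiary or donor.
draft = ai_draft_reply("Can we claim Gift Aid on this donation?")
sent = human_review(draft, approve=True)
```

The design point is that the approval step is a hard gate in the workflow, not an optional check: the AI's output has no route to the outside world except through the reviewing human.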

Free Charity AI Services and Resources

Our Charity AI Toolbox explains all of our free charity AI services, with links to all of our charity toolkits, insight briefings, training and the ChatGPT launch pad for charities, as well as our lists of AI resources produced by others.  You can also download a shareable infographic from the bottom of any AI page, including this one.

Where Can I Find More AI Risk Resources?

The National Cyber Security Centre (NCSC) provides advice, guidance and support on cyber security.  Here is their small business guide and also another for individuals and families.

And the UK ICO has produced an AI and data protection risk toolkit.

In the US, NIST has an AI Risk Management Framework, which is intended to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

And here's the OWASP Machine Learning Security Top Ten.

Charity AI FAQs

  • Does AI pose a risk to charity jobs?  Some routine charity jobs will be replaced by AI, but most will be changed and new jobs will be created, so we do not expect it to negatively impact charity sector jobs overall.
  • What charity jobs can’t AI replace?  Some routine, repetitive jobs will go, and the vast majority will change significantly but AI can’t replace jobs that require passion, creativity, emotional intelligence, judgement or insight.
  • What are the risks in using AI to write funding bids?  AI can be used to write grant applications, but it knows nothing about your charity, its plans or your project, so you must include the significant amount of detail a grant application requires, or the result will not be very good and/or will include made-up information (hallucination).
  • Does AI pose a risk to charity bid writers' jobs?  AI can write very effective grant bids, but it will not replace grant writers, because it does not have the insight, flair or creativity of a good grant writer, lacks understanding of context, and can struggle with long-form content.
  • Are there any free AI tools for grant writing?  The Charity Excellence in-system AI bunny can write funding bids for you, but you must register and login to use it – everything is free.

A Free One Stop Shop for Everything Your Charity Or Community Group Needs

A registered charity ourselves, the CEF works for any non-profit, not just charities.

Plus, 100+ downloadable funder lists, 40+ policies, 8 online health checks and the huge resource base.

Quick, simple and very effective.

Find Funding, Free Help & Resources - Everything Is Free.

Register Now!

We are very grateful to the organisations below for the funding and pro bono support they generously provide.

With 40,000 members, growing by 2,000 a month, we are the largest and fastest-growing UK charity community.

View our Infographic

Charity Excellence Framework CIO

14 Blackmore Gate
United Kingdom
HP22 5JT
charity number: 1195568
Copyrights © 2016 - 2024 All Rights Reserved by Alumna Ltd.