
Charity AI Risk Assessment Template and Risk Register

A free, easy-to-use and easy-to-understand charity AI risk assessment template and risk register to help you identify and manage charity AI risk


This is the charity AI risk assessment template and risk register that informed our work in creating the Charity AI Governance & Ethics Framework.  In making the risk assessment, I split AI risks into near, medium and long term, and also into risks that apply to everyone and those specific to charities.

It can be used by any charity (or anybody else) to quickly gain an overview of AI risk.  You can also copy the AI risk register, or take from it the AI risks relevant to you and include these in your own charity risk assessments and register.  You can also download an AI and Data Protection Risk Register toolkit and resources to assess and manage the risks for your own charity, using the risk register questions in the Governance and Risk questionnaires.

The second section enables you to assess whether AI is a risk or an opportunity for your own charity.  At the end, I've included other AI risk management resources and our charity AI FAQs.

The AI Governance & Ethics Framework can be used as a simple AI risk mitigation and avoidance tool.  If you are designing or commissioning AI systems, you may also want to look at the AI Design Principles guide and our Charity AI Data Protection Toolkit.

Charity AI Risk Assessment - Main Areas of Risk

We believe that charities need to think about AI risk in three ways.  Which of these apply to any given charity, and the extent to which each matters, will vary, but we think that all charities must take steps to safeguard themselves from AI risks.

  • Internal AI Risks - Poor AI Adoption.  Either failing to adopt AI or not using AI systems correctly and safely, resulting in:
    • Not achieving the charitable impact the charity could and should have; and/or
    • Creating a negative impact on beneficiaries and/or non-compliance with the law.
  • External AI Risks - Bad Actors.  The direct risk to charities and their beneficiaries.
    • For example, being targeted for fraud or misinformation.
  • Societal AI Risks - Poor Law & Regulation.  The risk of not creating effective AI regulation globally and nationally, leading to either deliberate unfair exploitation or unintended harm.
    • For example, campaigning charities in fields such as human rights.

Specific Factors That Increase AI Risk to a Charity

Key AI risks come from others using AI, and AI attacks may well be aimed at disruption, so all nonprofits face growing AI risk, however small they are and whether or not they use AI themselves.  However, some will be at greater risk.  Use the checklist below to assess whether you are:

  • We hold significant amounts of sensitive personal or valuable financial data, such as credit card details that would make us a target for scammers.
  • We are part of a community that may well be targeted for abuse and/or misinformation, such as Jewish, Muslim or LGBTQI+ communities.
  • We work with women and/or children who are more likely to be the targets of deepfake or other abuse.
  • Our beneficiaries are more likely to be susceptible to AI attack, such as younger or older people, or those with a learning disability.
  • We use digital and IT systems extensively and/or we make substantial use of AI in our work and/or we have only recently begun using AI.
  • Our trustees/management are dismissive of AI, and/or have only a limited understanding of the fundamental impact it will have, and have not taken steps to manage this.

CHARITY AI RISK ASSESSMENT TEMPLATE AND REGISTER

This risk register template details the main AI risks in likely chronological order, split into AI risks for all of us and, if applicable, specific charity AI risks.
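If you keep your register in a spreadsheet or script rather than on paper, the structure described above can be sketched as a simple table.  This is a minimal illustration only; the field names and the example entry are our assumptions, not a prescribed format.

```python
# Minimal sketch of an AI risk register as plain Python data.
# Field names and the sample entry are illustrative assumptions,
# not a prescribed format.
from dataclasses import dataclass, field


@dataclass
class RiskEntry:
    risk: str                    # short name, e.g. "Privacy"
    timeframe: str               # "current", "2025 onwards" or "2025-2050"
    everyone: str                # the AI risk for everyone
    charity_specific: str        # the specific charity AI risk, if applicable
    mitigation: str = ""         # planned response
    owner: str = ""              # named individual or team responsible


register: list[RiskEntry] = [
    RiskEntry(
        risk="Privacy",
        timeframe="current",
        everyone="Default data sharing with LLM data sets",
        charity_specific="Staff inputting sensitive personal data without consent",
        owner="Data Protection Lead",
    ),
]

# Print a one-line summary per entry, grouped by timeframe.
for entry in register:
    print(f"{entry.timeframe}: {entry.risk} (owner: {entry.owner or 'unassigned'})")
```

Each numbered risk below could become one `RiskEntry`, with the mitigation and owner columns filled in from the risk management options later in this page.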

Charity AI Risk Register - Current

AI Risks for Everyone | Charity AI Risk Assessment
1.  Privacy  Large language models (LLMs) are populated by scraping data from the Internet, and the default in many AI systems is to share your data with the LLM data set. Online content created by charities may be shared and used by others without any regard to copyright and IP, and there is a risk of charity people unwittingly, and without appropriate consent, inputting sensitive personal and financial data into LLM data sets.
2. Legal Issues  Data is being input into LLMs without the permission of the content creators, disregarding copyright and other IP, and can then be used by anyone. In many cases, charities may be happy that their content is used more widely, but there may be issues around imagery, particularly of children and women.
3. Disinformation  AI will enable disinformation to be made far more convincing.  There are guardrails, but also an increasing number of jailbreak websites that enable these to be circumvented. Charities campaign to counter abuse and discrimination, but AI will enable those creating this to do so in a far more convincing and compelling way.  There is also a risk of charities using AI-generated imagery without clearly annotating it as such and, in doing so, undermining public trust.
4. Scams  AI is already enabling far more convincing scams. There is a growing risk to charities and their beneficiaries.  More widely, as people become less confident in their ability to tell a scam from a genuine fundraising campaign, scams will undermine the public's trust, making people less confident about donating.
5. Discrimination AI does not itself discriminate but the data sets used and how these are trained may well do so.  Ensuring this doesn't happen may well slow adoption and would increase cost, so there's a real risk of this not being done correctly. There are risks of discrimination to charity beneficiaries. Charities need also to ensure that AI systems they adopt or create are not themselves discriminatory.
6. Deepfakes Use of deepfakes to spread misinformation, scams and to abuse individuals, particularly women and children. These are most likely to target the vulnerable, so represent a risk not only to charities but also their beneficiaries. Here's our AI Best Practice guide to deepfakes.

Charity AI Risk Register - 2025 Onwards

AI Risks for Everyone | Charity AI Risk Assessment
1. Obsolescence  Organisations that fail to respond may cling to services and/or business models that people no longer need, or can access for free, more effectively or more easily using AI.  There are going to be losers. The risk may be greater for charities that are larger and national, because they are less likely to have very niche or place-based activities that are too small or difficult to make building AI for them worthwhile. Our risk checklist is at the bottom of our Impact of AI on Charities Insight Report.
2. Loss Of Human Agency  We could design jobs to retain human agency and achieve more, or cut jobs and dumb down work to cut costs.  If we just cut costs, there is a risk of job losses, and also a mental health impact on those who remain in work. AI offers charities a huge opportunity to achieve more, and our people are our single biggest asset.  Dumbing down charity jobs may save some money, but there would be a much greater loss in terms of effectiveness and impact on our people.
3. Moral Outsourcing  A phrase coined by Rumman Chowdhury: blaming the machine by applying the logic of sentience and choice to AI, allowing those creating AI to reallocate responsibility for the products they build onto the products themselves, rather than taking responsibility.  The issues with AI arise because of the data sets we choose, how we train these and how we use the AI itself.  We need to own the problem.
4. Dumbing Down Creativity AI is already being used to create art and simpler news content.  If this becomes widespread, it may well impact on human artists and the quality of news and other content may well be dumbed down to become AI 'supermarket muzak'. It may be art, but the creative magic has gone.

Charity AI Risk Register - 2025 - 2050

AI Risks for Everyone | Charity AI Risk Assessment
1. Digital Moats  In September 2023, the UK Competition & Markets Authority highlighted the risk of the AI market falling into the hands of a small number of companies, with a potential short-term consequence that consumers are exposed to significant levels of false information, AI-enabled fraud and fake reviews.  In the long term, it could enable firms to gain or entrench positions of market power, and also result in companies charging high prices for using the technology. Economic moat was a term coined by Warren Buffett about the potential for companies to become so dominant that they exclude all competition.  The risk is that the small number of very large charities, which already receive the vast majority of sector fundraising income, will use their significant digital expertise to become even more dominant, to the detriment of smaller charities.  Our AI Steppingstones Strategy was, in part, created in response to this risk.
2. Digital Super Exclusion  A Charity Excellence concept, so just our work.  AI has the potential to hugely improve accessibility.  However, there will always be those who cannot or will not use digital, often the most vulnerable.  It seems likely to us that organisations may either switch off legacy systems as too expensive for the remaining very small numbers, or assume they've solved the problem and switch them off as no longer needed.  The result would be far fewer digitally excluded people, but those who are would become 'super excluded'.
3. The Paperclip Maximiser  A 2003 thought experiment in which a computer wipes out humanity because we are getting in the way of its primary aim of maximising paperclips.  This is obviously getting a lot of media attention, but we don't think it's a major risk until we achieve Artificial General Intelligence (AGI): human-like cognitive abilities.  Nobody knows, but we think it's probably still years away and may only be partly achievable.
4. Loss of Critical Thinking by Humans  In the same way that many lost their mental arithmetic skills with the advent of calculators and Excel, there is a risk that we slowly lose our critical thinking abilities if we outsource them to AI.  This may sound daft, but it has been flagged as a potentially serious and insidious risk by some leading thinkers, because critical thinking abilities are critical (funnily enough) to just about everything we do, from simple day-to-day decisions to 'should I press the button and start a nuclear war?'

AI Risk Management Options

The best response to any AI risk depends on a range of factors, such as the risk itself and the organisation's role and size.  However, here is a range of AI risk management actions that you might consider. Ensure that:

  • Your people are both supported and adequately trained in using AI.
  • Your cyber security and data protection procedures are robust and consistently being applied.
  • Policies and procedures have been updated to reflect the changes AI is bringing.
  • You have clear guidance on what AI tools can be used for, and what they must not be used for.
  • AI-generated content is always reviewed by a human before being used externally or for decision-making.
  • Sensitive or personal data is never entered into AI tools unless you are certain it complies with GDPR.
  • You have a process for regularly reviewing AI use and its impact on your beneficiaries, volunteers/staff, and operations.
  • You understand the limitations of AI and avoid using it for tasks that require specialist knowledge or judgement.
  • You monitor developments in AI to stay informed about emerging risks and opportunities.
  • For any AI procurement or use of new systems, there is a robust evaluation process and sign off at CEO/board level.
  • Responsibility for ensuring the above is delegated to a named individual or team.
  • Your risk register is updated to include new AI risks and any changes to existing risks.
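When updating your register, a common way to prioritise entries is the standard likelihood x impact convention, scoring each from 1 to 5 and mapping the product onto a traffic-light rating.  This is general risk-management practice rather than part of this template, and the thresholds below are illustrative assumptions you should adapt to your own charity's risk appetite.

```python
# Common likelihood x impact risk scoring (1-5 each), often used
# to prioritise risk register entries. The traffic-light thresholds
# are illustrative assumptions, not guidance from this template.

def risk_score(likelihood: int, impact: int) -> int:
    """Return likelihood x impact, each scored on a 1-5 scale."""
    for value in (likelihood, impact):
        if not 1 <= value <= 5:
            raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact


def rating(score: int) -> str:
    """Map a 1-25 score onto a simple traffic-light rating."""
    if score >= 15:
        return "red"    # act now
    if score >= 8:
        return "amber"  # plan mitigation
    return "green"      # monitor and review


# Example: a likely (4) and serious (4) risk scores 16 and rates red.
print(rating(risk_score(4, 4)))
```

A charity might, for example, score the scams risk higher if it holds payment card data (see the checklist earlier on this page), and revisit the scores at each trustee review.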

AI Risk Assessment - Is AI a Risk or Opportunity for my Charity?

To find out if your charity is ready, or at risk, use our charity AI roadkill toolkit, which gives you the questions you need to ask yourself.

Free Charity AI Services and Resources

You can also download a shareable infographic from the bottom of any AI page on this website, including this one.

Where Can I Find More AI Risk Resources?

The National Cyber Security Centre (NCSC) provides advice, guidance and support on cyber security.  Here is their small business guide and also another for individuals and families.

And the UK ICO has produced an AI and data protection risk toolkit.  They also have an AI Resource that covers wider issues, including information security and integrity, transparency, data protection by design, discrimination and bias, and human review.

In the US, NIST has an AI Risk Management Framework, which is intended to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

And here's the OWASP Machine Learning Security Top Ten.

Charity AI FAQs

  • Does AI pose a risk to charity jobs?  Some routine charity jobs will be replaced by AI, but most will be changed and new jobs will be created, so it will not negatively impact charity sector jobs.
  • What charity jobs can’t AI replace?  Some routine, repetitive jobs will go, and the vast majority will change significantly but AI can’t replace jobs that require passion, creativity, emotional intelligence, judgement or insight.
  • What are the risks in using AI to write funding bids?  AI can be used to write grant applications, but it knows nothing about your charity, its plans or your project, so you must include the significant amount of detail required for a grant application, or the result will not be very good and/or will include made-up information ('hallucination').
  • Does AI pose a risk to charity bid writers' jobs?  AI can write very effective grant bids, but it will not replace grant writers, because it does not have the insight, flair or creativity of a good grant writer, lacks understanding of context, and can struggle with long-form content.
  • Are there any free AI tools for grant writing?  The Charity Excellence in-system AI bunny can write funding bids for you, but you must register and login to use it – everything is free.

A Free One Stop Shop for Everything Your Charity Or Community Group Needs

A registered charity ourselves, the CEF works for any non profit, not just charities.

Plus, 60+ policies, 8 online health checks and the huge resource base.

Quick, simple and very effective.

Find Funding, Free Help & Resources - Everything Is Free.

Register Now!


Charity Excellence Framework CIO

14 Blackmore Gate
Buckland
Buckinghamshire
United Kingdom
HP22 5JT
charity number: 1195568
Copyrights © 2016 - 2024 All Rights Reserved by Alumna Ltd.