This is the charity AI risk register and AI for charities risk assessment template, which informed our work in creating the Charity AI Governance & Ethics Framework. I've split AI risks into near, medium and long term, and also into risks for everyone and risks specific to charities.
It can be used by any charity (or anybody else) to quickly gain an overview of AI risk. You can also copy the AI risk register, or take from it the AI risks relevant to you and include these in your own charity risk register.
The 2nd section enables you to assess if AI is a risk or an opportunity for your own charity. At the end I've included our charity AI FAQs.
The AI Governance & Ethics Framework can be used as a simple AI risk mitigation and avoidance tool. If you are designing or commissioning AI systems, you may also want to have a look at the AI Design Principles guide and our Charity AI Data Protection Toolkit.
This risk register template details the main AI risks in likely chronological order, split into AI risks for all of us and, if applicable, specific charity AI risks.
| AI Risks for Everyone | Charity AI Risks |
| --- | --- |
| Large language models (LLMs) are populated by scraping data from the Internet, and the default in many AI systems is to share your data with the LLM data set. | Online content created by charities may be shared and used by others without any regard to copyright and IP, and there is a risk of charity people unwittingly, and without appropriate consent, inputting sensitive personal and financial data into LLM data sets. |
| **2. Legal Issues** | |
| Data is being input into LLMs without the permission of the content creators, disregarding copyright and other IP; that data can then be used by anyone. | In many cases, charities may be happy for their content to be used more widely, but there may be issues around imagery, particularly of children and women. |
| AI will enable disinformation to be made far more convincing. There are guardrails, but also an increasing number of jailbreak websites that enable these to be circumvented. | Charities campaign to counter abuse and discrimination, but AI will enable those creating this content to do so far more convincingly and compellingly. There is also a risk of charities using AI-generated imagery without clearly labelling it as such and, in doing so, undermining public trust. |
| AI is already enabling far more convincing scams. | There is a growing risk to charities and their beneficiaries. More widely, as people become less confident in their ability to tell a scam from a genuine fundraising campaign, scams will undermine the public's trust, making people less confident about donating. |
| AI does not itself discriminate, but the data sets used and how these are trained may well do so. Ensuring this doesn't happen may slow adoption and would increase cost, so there's a real risk of it not being done correctly. | There are risks of discrimination against charity beneficiaries. Charities also need to ensure that AI systems they adopt or create are not themselves discriminatory. |
| Systems are increasingly moving online, and AI has significant potential both to make these more accessible and to move further support online. This creates a risk of support moving 100% online, further excluding those who cannot or are unwilling to use online systems. | Charities need to be mindful of ensuring their services are accessible to everyone, and these may need extending to plug any gaps in provision created by others, such as the NHS and social services, moving entirely to AI delivery of support. |
| AI Risks for Everyone | Charity AI Risks |
| --- | --- |
| Organisations that fail to respond will cling to services and/or business models that people no longer need, or can access for free, more effectively or more easily using AI. There are going to be losers. | The risk may be greater for larger, national charities, because they are less likely to have very niche or place-based activities that are too difficult or small to make it worth building AI for. Our risk checklist is at the bottom of our Impact of AI on Charities Insight Report. |
| **2. Loss Of Human Agency** | |
| We could design jobs to retain human agency and achieve more, or cut jobs and dumb down work to cut costs. If we just cut costs, there is a risk of job losses and also a mental health impact on those who remain in work. | AI offers charities a huge opportunity to achieve more, and our people are our single biggest asset. Dumbing down charity jobs may save some money, but there would be a much greater loss in terms of effectiveness and impact on our people. |
| **3. Moral Outsourcing** | |
| A phrase coined by Rumman Chowdhury: blaming the machine by applying the logic of sentience and choice to AI, allowing those creating AI to effectively reallocate responsibility for the products they build onto the products themselves, rather than taking responsibility. The issues with AI arise because of the data sets we choose, how we train these and how we use the AI itself. We need to own the problem. | |
| **4. Dumbing Down Creativity** | |
| AI is already being used to create art and simpler news content. If this becomes widespread, it may well impact human artists, and the quality of news and other content may well be dumbed down into AI 'supermarket muzak'. It may be art, but the creative magic has gone. | |
| AI Risks for Everyone | Charity AI Risks |
| --- | --- |
| **1. Digital Moats** | |
| In Sep 23, the UK Competition & Markets Authority highlighted the risk of the AI market falling into the hands of a small number of companies, with a potential short-term consequence that consumers are exposed to significant levels of false information, AI-enabled fraud and fake reviews. In the long term, it could enable firms to gain or entrench positions of market power, and result in companies charging high prices for using the technology. | 'Economic moat' is a term coined by Warren Buffett for the potential of companies to become so dominant that they exclude all competition. The risk is that the small number of very large charities, which already receive the vast majority of sector fundraising income, will use their significant digital expertise to become even more dominant, to the detriment of smaller charities. Our AI Steppingstones Strategy was, in part, created in response to this risk. |
| **2. Digital Super Exclusion** | |
| A Charity Excellence concept, so just our work. AI has the potential to hugely improve accessibility. However, there will always be those who cannot or will not use digital, often the most vulnerable. It seems likely to us that organisations will either switch off legacy systems as too expensive to run for the remaining very small numbers, or assume they've solved the problem and switch them off as no longer needed. The result would be far fewer digitally excluded people, but those who remain would become 'super excluded'. | |
| **3. The Paperclip Maximiser** | |
| A 2003 thought experiment in which a computer wipes out humanity because we are getting in the way of its primary aim of maximising paperclips. This is getting a lot of media attention, but we don't think it's a major risk until we achieve Artificial General Intelligence (AGI): human-like cognitive abilities. Nobody knows, but we think AGI is probably still years away and may only be partly achievable. | |
| **4. Loss of Critical Thinking by Humans** | |
| In the same way that many of us lost our mental arithmetic skills with the advent of calculators and Excel, there is a risk we may slowly lose our critical thinking abilities if we outsource these to AI. This may sound daft, but it has been flagged as a potentially serious and insidious risk by some leading thinkers, because critical thinking is critical (funnily enough) to just about everything we do, from simple day-to-day decisions to 'should I press the button and start a nuclear war?' | |
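As suggested earlier, you can copy the risks above into your own charity risk register. Purely as an illustrative sketch for charities with some technical capacity, here is one way such entries could be recorded in machine-readable form, using the conventional likelihood × impact scoring found in most risk registers. All field names, scores and mitigations below are hypothetical examples, not part of the register itself.

```python
# Illustrative only: a minimal risk-register entry with likelihood x impact
# scoring. Scores and mitigations shown are made-up examples.
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str          # short risk title, e.g. "Data privacy"
    horizon: str       # "near", "medium" or "long" term
    likelihood: int    # 1 (rare) to 5 (almost certain)
    impact: int        # 1 (minimal) to 5 (severe)
    mitigation: str    # planned control or response

    @property
    def score(self) -> int:
        # Conventional risk score: likelihood multiplied by impact (max 25)
        return self.likelihood * self.impact

register = [
    AIRisk("Data privacy", "near", 4, 4,
           "Staff guidance on what may be input into AI tools"),
    AIRisk("AI-enabled scams", "near", 4, 5,
           "Fraud awareness training; follow NCSC guidance"),
    AIRisk("Digital super exclusion", "long", 2, 4,
           "Retain non-digital service channels"),
]

# List the highest-scoring risks first, as a register review would
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name} ({risk.horizon} term): {risk.mitigation}")
```

The likelihood and impact scales (1 to 5) mirror the scoring most charity risk registers already use, so entries generated this way can be merged into an existing register without rework.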
Frontier AI refers to highly capable, general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities of today's most advanced models. In Oct 23, the Government Office for Science published Future Risks of Frontier AI, covering capabilities, other uncertainties and scenarios.
I think those most at risk will be charities that fail to respond and cling to services and/or business models that people won't need or want any longer. I'm sure your services are excellent and that AI probably couldn't replace them, but that may well be the right answer to the wrong question.
The right question to ask is, what do your beneficiaries want? And that's often about a range of factors - cost (even if only the bus fare), travel time and hassle to get there, time to get an appointment, breadth of services available, opening hours etc. AI will be free/low cost, is available immediately, 24/7, has huge data banks to draw on and may be able to meet your beneficiaries' needs in a completely different way.
Charity AI is not something that will pose a risk at some point in the future; it's already here, and its use is likely to grow rapidly. Here are 2 examples of charity AI use.
The more people chat to them, the better they get and that's without taking into account the rapid advances in AI capabilities.
We believe that charities need to think about AI risk in 3 ways. Which of these apply to any given charity, and the extent to which each matters, will vary, but we think that all charities must take steps to safeguard themselves from AI risks.
To find out if your charity is ready, or at risk, here are my suggested AI risk indicators you may wish to think about:
In addition to the 6 systems within Charity Excellence, we provide a range of free Artificial Intelligence (AI) services. If you'd like the infographic toolkit, with links to our AI resources, email us.
Just click the AI bunny icon in the bottom right of any web page or in-system and tell it what you need. Ask as many questions as you wish; they're free, available 24/7 and will not collect any personal information.
Charity AI Toolkits & Guides:
Managing charity AI Adoption
Charity AI Training
AI is already increasing the cyber risk to charities, with jailbroken AI bots now available. We expect frauds to become increasingly convincing and to pose a significant threat to charities and, particularly, vulnerable beneficiaries. The National Cyber Security Centre (NCSC) provides advice, guidance and support on cyber security. Here is their small business guide and also another for individuals and families.
Plus, 60+ downloadable funder lists, 40+ policy templates, 8 online health checks and the huge resource base.
Quick, simple and very effective. Nearly half our ratings are 10/10.