AI for charities - the Charity AI Governance and Ethics Framework promotes responsible use of AI in the charity sector, by providing a simple, practical and flexible framework within which to manage the ethical challenges of charity AI use.
AI governance and ethics is a complex and fast-evolving area. Very soon, all of us in the charity sector will either be using AI or find our work affected by systems that use it, and the huge uncertainties and risks are now widely accepted.
To support the charity sector in responding to this, we used our own AI ethics procedures, informed by the work of others, to create this Charity AI Governance & Ethics Framework. This is a living document - any comments/suggestions would be very welcome. Send these to me at email@example.com.
If you have any questions about AI or our charity AI services, click the AI bunny icon in the bottom right of your screen and ask it short questions, including key words.
At the end of March 2023, some of the world's leading AI experts called for a six-month pause on training the next wave of systems more powerful than GPT-4. They warned of:
“an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control”.
The problem is that it's just one of a whole series of new generative AI systems, which are genuinely revolutionary; billions of pounds are being poured in by the tech giants, and there are potentially eye-watering profits to be had. The concerns are very real, but the cat's out of the bag. Even the CEO of OpenAI (the company that makes ChatGPT) thinks tech development needs to be paced to match moral, legal and ethical development.
The Government's AI regulation white paper sets out its proposals for implementing a framework for regulating AI. It's currently out for consultation, but our guess is that regulation will probably be 'light touch', to promote economic growth.
However, no-one disagrees about the huge scale and speed of the impact AI will have, or of the enormous challenges this poses. Charities must act now to ensure their use of AI is fair, legal and safe. The AI Governance and Ethics Framework provides a toolkit for the charity sector to do so.
This is the charity AI risk assessment we created for our own charity, which informed our work in creating this AI Framework.
This framework is a simple, practical and flexible tool for anyone using AI in the charity sector. Simply add, amend or delete to meet your needs. Before you get started, maybe read these 25 questions you should ask yourself about AI, from the Wall Street Journal, to challenge your thinking.
This AI framework can be used by charities and non profits to:
For those designing, commissioning or funding AI, it can be attached to RFPs, contracts and grants agreements, or relevant extracts included within these.
This AI governance and ethics framework has been developed to ensure that the use of artificial intelligence (AI) within our charity is ethical and legal, by meeting high standards of fairness, accountability, transparency, privacy, and non-discrimination in developing, running, and overseeing any AI systems we commission, create, develop, manage or use.
This policy applies to all trustees, other volunteers, employees, contractors, and third-party representatives working on our behalf. Its requirements should be reflected in other policies and procedures, agreements and contracts, as necessary.
We define Artificial Intelligence (AI) as the ability of machines or software to perform tasks that would normally require human intelligence. AI systems can process data, learn from it, and make decisions or predictions based on that data. AI is a broad field that encompasses many different types of systems and approaches to machine intelligence, including rule-based AI, machine learning, neural networks, natural language processing and robotics.
AI systems will have appropriate human oversight with humans being responsible for making all final decisions on their output.
We will create any necessary guidelines:
We will ensure that:
Risk analysis has included:
The risks have been clearly identified and quantified, and the avoidance/mitigation action put in place will ensure that the level of risk remains within acceptable limits.
You can use our Charity AI Risk Toolkit to better understand AI risks and create your own AI Risk Register, if you wish to. There is also a supporting toolkit of AI Design Principles for those building or commissioning AI systems.
We have carried out a Data Protection Impact Assessment (DPIA) for AI and made any necessary changes to our policies and procedures. As part of that, insofar as reasonably possible, we will:
We are aware of the ICO guidance on AI and data protection and have reflected any additional requirements in our policies and procedures.
Here is our assessment of the impact of AI on charity sector jobs and what charities need to think about and do in integrating AI into roles and procedures. The Institute for the Future of Work has created a Good Work Algorithmic Impact Assessment that may be of interest, including 10 dimensions of 'good work'.
We will ensure that:
We have included AI scraping of our website data in our risk assessment. Where we do not wish our site pages to be scraped, we have taken steps to minimise the risk of this, whilst recognising that such action may not be respected by AI companies. This might include:
Meta Tags. The use of meta tags, such as those above, has been floated by various companies and agencies, but it is not an industry standard and there's no guarantee that all, or even any, bots will comply.
T&Cs. Our lay person's understanding of web scraping is that it might be considered illegal if sufficient creative input was used to create the data scraped, which could amount to copyright infringement. We also understand that it is possible to restrict the re-use of scraped data through your T&Cs. That is, if a company accesses your website and consents to your T&Cs, which contain a restriction on the re-use of the data, and then re-uses it anyway, it may be in breach of contract.
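To illustrate the kind of steps referred to above, a site's robots.txt file can ask known AI crawlers not to fetch its pages, and a meta tag can signal that content should not be used for AI training. The user-agent names below (GPTBot, Google-Extended, CCBot) are those published by OpenAI, Google and Common Crawl at the time of writing; the 'noai' meta tag is a proposed convention, not a standard, and, as noted above, compliance with any of these is entirely voluntary.

```
# robots.txt - placed at the root of the website.
# Asks specific AI crawlers not to fetch any pages.
# These rules are requests, not enforcement.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

The corresponding (non-standard) meta tag would sit in each page's HTML head, e.g. `<meta name="robots" content="noai, noimageai">`. Neither mechanism blocks a crawler that chooses to ignore it, which is why the risk assessment above treats these only as risk-minimising measures.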
We design and build our own AI systems with pro bono support from expert AI companies. We have used our learning to create a separate resource that should be read in conjunction with this one. It provides a simple set of AI design principles for those designing or commissioning AI systems.
In addition to the six systems within Charity Excellence, we provide a range of free Artificial Intelligence (AI) services. If you'd like the infographic toolkit, with links to our AI resources, email us.
Just click the AI tech bunny icon in the bottom right of any web page or in-system and tell it what you need. Ask as many questions as you wish to, they're free, available 24/7 and will not collect any personal information.
AI Insight Briefings: Impact on:
Toolkits & Guides:
Managing AI Adoption
ChatGPT for Charities
If you're wary of AI, login and chat to the in-system AI bunny and it'll create and run ChatGPT prompts for you.
We have created two one-hour training webinars.
A particular aspect of AI fundraising ethics is bid writing. We think AI has the power to help level the playing field of grant-making processes that often leave small charities less able to secure income, but there is a significant risk of it being banned, which would make the process even more unequal.
Whether you should be awarded a grant depends on how great the unmet need is and how well you would meet it. However, whether you get it often largely depends on how well you understand bid writing, how good you are at writing and, often, how well you know the funder. These are two quite different things, making the system often unfair for small and marginalised groups and less effective for grant makers. In 40+ years in the sector, I've been both a grant maker and an applicant and have personal experience of both examples above.
Our AI bid writer asks people a whole series of questions and then uses ChatGPT to turn the answers into a well-written case for support that contains the key information a grant maker needs to make a decision. It's available free to anyone and works for everyone, including those who know nothing about writing funding bids and those who can't write well - for whom English is a second language, who have learning difficulties, or who aren't particularly good at writing prose. And no, it will never be as good as an experienced bid writer. AI simply doesn't have the insider knowledge, creativity or flair they bring. But it does substantially level the unequal playing field by allowing those who can't write good bids to do so, and it reduces the workload, making bid writing quicker and simpler for them.
We're just testing the waters with this and can already see how it is possible to create an end-to-end fundraising process, from search to submission, that would be fairer, reduce the workload on applicants and work better for grant makers. However, AI is not well understood, and the lack of understanding of this very new technology, plus potentially vested interests not wanting the competition it would bring, may result in it being banned by grant makers. If it is, this is what I think might happen: small and marginalised groups would not be able to use it, but agencies and big trust teams might well use it to create initial drafts and then use experienced bid writers to turn these into high-quality bids, making it quicker and easier for them to get bids out - and making an unequal playing field even more so.
The flip side to that is this is a huge opportunity to help level the playing field. Grant makers could use AI to turn their application processes into a series of questions that anyone can answer, in the same way we have done. Then using AI to turn those questions into a standard application for consideration. It's not quite the same as anonymised job applications but follows the same principles. Our bid writer would become obsolete, but we’d be happy with that, because our aim is to enable charity sector adoption of AI, not deliver it.
The Charity AI Ethics & Governance Framework was created with some help from ChatGPT and was also informed by the work of others, including:
Government & Regulatory:
Other Useful Resources:
The framework, or parts of it, may be used by any non-profit or public body to support us all in ensuring our use of AI is fair, legal and safe.
However, we are not lawyers, so we cannot offer legal advice, and this is a complex and very fast-moving area. If you need legal advice, you must seek it from a lawyer, not us. If you use our work, you accept that we can have no responsibility whatsoever for any detriment or loss arising from your use. You will seek professional advice if you need it, and you undertake to ensure that your version has been added to, amended and/or had parts deleted, as necessary, so that it fully meets your needs and any applicable laws and regulatory requirements. You will also ensure that anything you use our work for is regularly reviewed and kept up to date to ensure it remains legally compliant and appropriate for your organisation.
In using my work, you accept that I retain copyright of it and of any derivatives of it you may create, that any use of the framework will be appropriately recognised, and that it may not be used for commercial purposes without my prior written consent.
The initial draft of this framework included some limited input derived from prompts submitted to ChatGPT. This work was last updated November 2023.