AI For Charities - Charity AI Design Principles

We design AI for charities using our own UK charity AI design principles, so we've published these together with other AI design tools and resources

AI For Charities - Charity AI Design Principles

We design AI for charities using our own charity AI design principles, with pro bono support from expert companies.  This charity AI design tool is a simple guide for those commissioning, building or designing systems, for charities and non profits.

If designing charity AI is a step too far for you, you can make sure you don't miss out by using Biomni's Charity Bot, which uses the same Tenjin AI system our own AI bunnies do.

Do We Need To Design A New Charity AI System At All?

AI is hugely powerful and is (and will) enable charities to both augment their capabilities and remove digital debt. However, there may already be existing and, potentially better, charity AI and non-AI solutions. Before you start designing a new charity AI system, identify all the options and carry out a cost benefit analysis, including financial and non financial costs and benefits.  Some things to think about:

  • Don't be sucked in jumping onto the charity AI bandwagon just because it's the 'shiny new toy' everyone wants.
    • Is there an existing non AI solution?
    • And, if not, is this a problem that lends itself to a charity AI design solution, or maybe not?
  • Has someone already built a charity AI solution for this, or something similar?
    • Does it provide you with the solution you need?
    • If not, what worked well and not so well?
      • Use that to learn from their mistakes.
  • If the environment is part of your charity ethics, you may wish to consider that AI systems are very power intensive.

My thanks to Dorian Harris whose very helpful input led to this section being created.

CHARITY AI DESIGN PRINCIPLES

I make no claim to be any kind of expert but based on my own experience in building AI systems and reading the work of others, these are the principles we have adopted for Charity Excellence and which we have now published as our Charity AI Design Principles.  These are intended to be informed by input by anyone working on this issue and may be used by anyone. If you wish to provide input, please e mail me at ian@charityexcellence.co.uk.

Charity AI Data Quality & Data Set Training

Inadequate or poor quality or badly cleansed charity data and/or inadequacies in training the data set could (and has been known to) create inaccuracies, misinformation and/or bias.

  • Processes have been built into the design/project implementation that will give a high level of confidence this is not the case.
  • That should include assessment of the data and training processes, with testing of outputs to validate this and effective guardrails.

Potential For Exploitation

The potential for AI to be used to manipulate or exploit users, inadvertently or otherwise, such as promoting addictive behaviours or targeting vulnerable users.

  • Processes have been built into the design/project implementation that will give a high level of confidence this is not the case, particularly if users will be vulnerable adults or children.

New AI Cyber Threats

  • The proposed system has been assessed for vulnerabilities to new AI cyber threats, such as LLM prompt injection attacks, and any necessary action has been taken to ensure these will be adequately mitigated.

Onboarding Charity AI Users

Some users may be very receptive but others may be suspicious or feel threatened.

  • AI tools must be user-friendly and trustworthy.
  • Navigation should be intuitive and simple, avoid technical terms and.
  • Produce outputs and reports that are both understandable and useful to those using the system.

Making Charity AI Systems Transparent & Explainable

Explainability is seen as a key pillar of AI governance.  It enables those using or impacted by AI systems to understand and challenge system outcomes/decisions, not least any bias within these.  However, AI systems can be hugely complex and it may not be possible to explain the reasoning behind the results/decisions these may make. This may lead to mistrust and an unwillingness to use a system.  The use of new techniques, such as online continuous experimentation (Bing) may help to overcome this.

  • AI systems should be designed with processes that are transparent and explainable and/or have processes that reduce the risk of mistakes and bias to an acceptable degree.

Integration of Charity AI with Other Systems

AI systems may integrate with and import/export data from other existing systems or there may be real benefits/risks in doing so.

  • Linkages are mapped to identify and resolve any legal or organisational policy implications and the impact on wider issues, such as those below.

Other Design Considerations For Charities

AI implementation may well impact on wider organisational issues, such as policies and training, and/or may require changes to working practices and job roles.

  • For this and other reasons above, genuine staff consultation and communication from the outset, through to post implementation is likely to be critical in successful deployment.

Communicating AI In Your Charity

The use of generative AI is a high profile issue and its use is known to come with serious risks.  The benefits may be uncertain or not understood and many in the sector are tech phobic.

  • Consideration should be given to how external stakeholders will perceive implementation and to communicate the above to them effectively.
  • Avoid jargon and, instead us plain English to articulate the process, timescales, benefits, risks and action being taken to mitigate the risks.

Charity AI Design Risk Management

  • A robust design risk management exercise has been carried out, which extends beyond the system itself.  For example, consideration being given to additional checks and controls by management, restricting access and not using the system for some activities/decisions.
  • Any necessary changes to existing risk systems, policies and procedures have been implemented.
  • Where a system is intended to be used by beneficiaries who may be unable or unwilling to use it, consideration should be given to making alternative provision for them.

CHARITY AI DESIGN RESOURCES AND TOOLS

Here are some AI tools and information that may be useful in thinking about designing AI systems for charities and non profits.

Charity AI Governance and Ethics Framework

Our Charity AI Governance and Ethics Framework has been created to promote responsible use of AI by non profits, by providing a simple, practical and flexible framework within which to manage these ethical challenges.  It should be read in conjunction with these design principles.

The AI framework can be used by charities and non profits to:

  • Create an AI framework for your non-profit and/or.
  • Embed relevant aspects in your existing procedures, such as.
    • Data protection, Equality, Diversity & Inclusion (EDI) and ethical fundraising policies.

For those commissioning, funding or designing AI, it can be attached to RFPs, contracts and grant agreements, or relevant extracts included within these.

Charity AI Risk Management Framework

We converted our own AI risk framework into one that can be used by all charities and others.  It has all of the AI risks we've identified, including some more off the beaten track ones, plus those we think will be of specific concern to charities.

OTHER AI DESIGN RESOURCES & TOOLS

A range of resources and tools of relevance to AI design, including standards and assurance systems.

AI Regulation - Data Protection

In March 2023, the ICO updated its guidance on AI and data protection.

AI Regulation - Markets & Competition

There's not much yet but in September 2023, the Competition & Market Authority published its review into AI Foundation Models, and their impact on competition and consumer protection.  This set out 7 principles designed to make developers accountable, prevent Big Tech tying up the tech in their walled platforms, and stop anti-competitive conduct like bundling.

These are focussed on markets and competition (obviously) but include some good thinking on issues such as open and closed models, interoperability, access, transparency and deployment options.  These have been helpfully summarised in a table.

AI Foundation Model Transparency

In October 2023, Stanford University (Human Centred Artificial Intelligence) published the initial version of its Foundation Model Transparency Index (FMTI) that lays out the parameters for judging a model's transparency.

It grades companies on their disclosure of 100 different aspects of their AI foundation models, including how these were built and used in applications.  I think it’s mainly intended to inform the development of Government regulation of AI but is interesting in that it shows how little transparency there is in practice and the differences between the model developers.

The AI Standards Hub

The Alan Turing Institute website for the AI standards community, dedicated to knowledge sharing, capacity building, and world-leading research.  It aims to build a vibrant and diverse community around AI standards.

CDEI AI Assurance Techniques

The Centre for Data Ethics and Innovation portfolio of AI assurance techniques and how to use it.

AI Safety Policies

For the 2023 AI Safety Summit, the Government requested that leading AI companies outline their AI Safety Policies across nine areas of AI safety:

  • Responsible Capability Scaling provides a framework for managing risk as organisations scale the capability of frontier AI systems, enabling companies to prepare for potential future, more dangerous AI risks before they occur.
  • Model Evaluations and Red Teaming can help assess the risks AI models pose and inform better decisions about training, securing, and deploying them.
  • Model Reporting and Information Sharing increases government visibility into frontier AI development and deployment and enables users to make well-informed choices about whether and how to use AI systems.
  • Security Controls Including Securing Model Weights are key underpinnings for the safety of an AI system.
  • Reporting Structure for Vulnerabilities enables outsiders to identify safety and security issues in an AI system.
  • Identifiers of AI-generated Material provide additional information about whether content has been AI generated or modified, helping to prevent the creation and distribution of deceptive AI-generated content.
  • Prioritising Research on Risks Posed by AI will help identify and address the emerging risks posed by frontier AI.
  • Preventing and Monitoring Model Misuse is important as, once deployed, AI systems can be intentionally misused for harmful outcomes.
  • Data Input Controls and Audits can help identify and remove training data likely to increase the dangerous capabilities their frontier AI systems possess, and the risks they pose.

The Government’s Emerging Processes for Frontier AI Safety complements companies’ safety policies by providing a potential list of frontier AI organisations’ safety policies.

A Free One-stop-shop for Everything You Charity Needs

  • Funding Finder - with categories for Core Funding and Small Charities & Community Groups.
  • Help Finder – including free fundraising help and companies that make donations.
  • Data Finder – for fundraising bids & research, impact reporting, planning and campaigning.

Plus, 60+downloadable funder lists, 8 online health checks and the huge resource base.

Quick, simple and very effective. Nearly half our ratings are 10/10.

Find Funding, Free Help & Resources - Everything Is Free

Register Now!

Free Charity Excellence AI Services & Support

In addition to the 6 systems within Charity Excellence, we provide a range of free Artificial Intelligence (AI) services.

  • Enabling you to find funding, help, resources and data.
  • Ask Me Anything – answering almost any question about running your charity.
  • Funding Bid Writer Service.
  • Welfare support and funding for individuals.
  • Charity & non-profit facts and popular questions, for the public and media.

Just click the AI tech bunny icon in the bottom right of any web page or in-system and tell it what you need.  Ask as many questions as you wish to, they're free, available 24/7 and will not collect any personal information.

Charity AI Insight Briefings: Impact on:

Charity AI Tools & Guides:

Managing charity AI adoption

ChatGPT for Charities

If you're wary of AI, chat to the in-system AI bunny and it'll create and run ChatGPT prompts for you.

Charity AI Training

We have created AI training webinars.

  • Introduction to AI.
  • Using ChatGPT.
  • Introduction to AI for Grant Makers.
  • How AI will impact the charity sector.
Register Now
We are very grateful to the organisations below for the funding and pro bono support they generously provide.

With 40,000 members, growing by 2000 a month, we are the largest and fastest growing UK charity community. How We Help Charities

View our Infographic

Charity Excellence Framework CIO

14 Blackmore Gate
Buckland
Buckinghamshire
United Kingdom
HP22 5JT
charity number: 1195568
Copyrights © 2016 - 2024 All Rights Reserved by Alumna Ltd.
Terms & ConditionsPrivacy Statement
Website by DJMWeb.co
linkedin facebook pinterest youtube rss twitter instagram facebook-blank rss-blank linkedin-blank pinterest youtube twitter instagram