AI Deepfake Video and Photo
AI deepfake photo, video and audio content is rapidly proliferating and represents a significant risk to charities. Preventing AI deepfakes is very difficult if you post video or photo imagery online, but there is a great deal that can be done to mitigate the risk. This charity best practice guide helps you to think through the risks your charity may face, the degree of risk, and the steps you will take to manage it. I was targeted by an AI deepfake voice cloning scam in 2024. We are all at risk.
This guide is part of our Charity AI Ready programme. We would welcome constructive criticism to improve it - send it to ian@charityexcellence.co.uk.
What are AI Deepfakes?
AI deepfakes are synthetic media, typically videos or audio recordings, that have been manipulated using artificial intelligence (AI) to create realistic but entirely fake representations of individuals. These can make it appear as though someone is saying or doing something they never did. Bad actors use AI deepfakes for various malicious purposes, such as spreading misinformation, discrediting individuals, committing fraud, and even blackmail. For example, they might manipulate a video to make it seem like a public figure said something controversial.
How Easy is it to Spot Deepfakes?
In February 2025, iProov tested 2,000 UK and US consumers and only 0.1% could accurately distinguish all real video and imagery from deepfake content. Deepfake videos proved more challenging than deepfake images, with participants 36% less likely to correctly identify a synthetic video. A third of older people (55+) had never even heard of deepfakes. Try their Can You Spot a Deepfake quiz. Transparency note: iProov sells biometric identity services.
How Much of a Threat Are AI Deepfakes?
Growing Threat. The threat of AI deepfakes in the UK has grown significantly in recent years, with the number of deepfake videos online doubling every six months and the global deepfake AI market growing rapidly. The threat is already serious and is expected to worsen; the UK government has recognised deepfakes as a significant threat to national security and democracy. There are concerns that deepfakes could be used to influence elections, incite hate, and undermine public trust.
Threat to Charities. The threat to charities is threefold.
- Civil Society. We support civil society, and deepfakes represent a significant threat to society itself.
- Charities. There is a threat to charities themselves:
  - we will be targeted for scams, and
  - more widely, we need the public to trust us, both to feel able to use our services and to donate the funding we need. Deepfakes will undermine that trust.
- Beneficiaries. The often very vulnerable people we exist to support will be targets for both scams and misinformation. Our beneficiaries may be targeted:
  - simply because they are vulnerable to exploitation, or
  - because they belong to a marginalised group.
  - There is also the risk of sexual harassment and abuse of women and children.
The Scale of Deepfake AI Threats
Analysis by My Image, My Choice, a campaign group that tackles image abuse, found that 80 per cent of apps which create deepfakes had been launched in the past 12 months. One app had been downloaded 14 million times, and another processed 600,000 images in its first three weeks after launch. Their analysis also found that 99 per cent of the false sexually explicit images showed women.
Deepfake AI - Preventing the Download of Video and Photos
A basic deepfake can be created from almost any image. However, there is a great deal you can do to make images harder to access, such as avoiding uploading high resolution facial images and not making available images that show your face from multiple angles or with a range of expressions. Here is a range of approaches you may wish to consider.
Limit Access to Photos - Social Media
- Tighten Social Media Privacy Settings: On social media platforms, adjust your privacy settings so that only trusted people can see your photos. Avoid making images public if possible. You can also make charity groups private.
- Minimise Public Uploads: The more public images there are, the easier it is for bad actors to create deepfakes. For example, if you're posting about something, like a new outfit, does your face need to be in the photo?
Limit Access to Photos - Web Sites
- Limit Public Access: Use access controls or password protection for certain web pages containing personal images. Limiting who can see the images reduces the risk of them being harvested by bots or malicious users.
- Disable Right-Click Download: Disable the ability to right-click and download images. While this is not foolproof (users can still take screenshots), it adds a layer of difficulty for casual misuse.
Limit How Long Photos are Available For
- Expiring Content: Platforms like Snapchat or Instagram Stories offer features where images expire after a set time. Reducing the time your images are available lowers the chances of them being harvested for deepfake purposes.
- Temporary Images: Use features on websites that allow for temporary or expiring content. Certain CMS platforms or plugins allow images to be visible only for a short time, reducing the window in which they can be harvested for misuse.
How to Make Photos Safer from Deepfakes
A deepfake can be created using just a single image, but deepfakes work best when the creator has several high quality, high resolution images of your face from different angles. Anything you do to undermine these factors makes it less likely that an image will be used to create a deepfake. Here are ideas for you to consider that could help to reduce the risk of your photos being misused.
Use Photos That are Less Likely to be Targeted
If you’re picking between two images that work equally well:
- Use a photo of a man rather than a woman or child; women and children are more often targeted in harmful deepfakes.
- Choose group photos instead of close-ups of individuals.
- Use photos without people when possible, such as landscapes, objects, or settings.
Use Lower Image Quality
Deepfake tools need crisp pixel data.
- 1080 × 1080 px is Instagram's standard feed size; Facebook also targets ~1080 px width for posts.
- 800 × 600 px cuts the width by ~25%, removing detail AI could use.
- Going even smaller (e.g., 640 × 480 px) further degrades facial features, but beware of making images too pixelated for your audience.
- Export at modest pixel dimensions (800 × 600 px or lower).
- Keep file size small (under ~200 KB) via JPEG compression.
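If you have lots of images, resizing and compressing can be scripted. Below is a minimal Python sketch using the Pillow imaging library; the filenames and the 200 KB budget are illustrative assumptions, not fixed rules.

```python
# A minimal sketch using Pillow (pip install pillow).
# Filenames and the 200 KB budget are illustrative, not prescriptive.
import os
from PIL import Image

def shrink_for_web(src, dst, max_size=(800, 600), max_bytes=200_000):
    img = Image.open(src).convert("RGB")
    img.thumbnail(max_size)  # resizes in place, preserving aspect ratio
    # Step the JPEG quality down until the file fits the size budget
    for quality in range(85, 30, -5):
        img.save(dst, "JPEG", quality=quality)
        if os.path.getsize(dst) <= max_bytes:
            break

shrink_for_web("team_photo.jpg", "team_photo_web.jpg")
```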
Add Filters or Effects
Simple edits can confuse AI tools that try to copy faces.
- Use filters like posterization, noise, or stylised colours.
- Effects that change texture or shape make deepfake creation harder.
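As a hedged illustration, posterization is a one-line operation in most image libraries; this Pillow sketch flattens each colour channel to a handful of levels. The filename and the 3-bit setting are arbitrary choices.

```python
# Posterize a photo with Pillow, reducing each RGB channel to 3 bits
# and flattening the subtle colour gradients across the image.
from PIL import Image, ImageOps

img = Image.open("photo.jpg").convert("RGB")
poster = ImageOps.posterize(img, 3)   # 3 bits per channel = 8 levels each
poster.save("photo_posterized.jpg", quality=85)
```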
Hide or Blur the Face
Obscuring parts of the face reduces the data AI can use.
- Use sunglasses, hats, or masks to cover parts of the face.
- Try light blurring or pixelation: enough to reduce detail, but still usable.
- Add stickers or emojis on social media to block key features.
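If you have several photos to treat, the blurring can also be scripted. Here's a small Pillow sketch that blurs one rectangular region; the coordinates are invented for illustration, and in practice you'd locate the face by eye or with a face detector.

```python
# Blur one region of a photo with Pillow.
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")
box = (220, 140, 420, 360)   # (left, upper, right, lower): illustrative face location
region = img.crop(box).filter(ImageFilter.GaussianBlur(radius=8))
img.paste(region, box)       # paste the blurred patch back over the face
img.save("photo_blurred.jpg")
```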
Use Watermarks
Watermarks make it harder to reuse your image convincingly.
- Add a visible watermark to public photos.
- Use invisible watermarking tools to track where your image appears online.
- This won’t stop deepfakes, but it helps with detection and takedown.
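Most editors can add a visible watermark, but it can be scripted too. A minimal Pillow sketch, with the charity name, position, and opacity all placeholder choices:

```python
# Overlay semi-transparent watermark text on a photo with Pillow.
from PIL import Image, ImageDraw

img = Image.open("photo.jpg").convert("RGBA")
overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
draw.text((img.width // 10, img.height // 2), "(c) Example Charity",
          fill=(255, 255, 255, 110))   # white at roughly 43% opacity
marked = Image.alpha_composite(img, overlay).convert("RGB")
marked.save("photo_watermarked.jpg")
```

Pillow's default font is small; for a real watermark you'd load a larger one with ImageFont.truetype.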
Add Noise or Grain
A faint layer of speckles can confuse AI without changing how the photo looks to people.
- Use your editor’s “noise” or “film grain” filter.
- Keep the effect light, just enough to add a fine texture.
- The tiny speckles break up sharp facial lines, making it harder for AI to map faces.
- To viewers, the image still appears natural.
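In code, grain is just faint random noise added to every pixel. A sketch using Pillow and NumPy; the sigma value of 6 is an assumption for "barely visible" and worth tuning by eye.

```python
# Add light Gaussian grain to a photo (Pillow + NumPy).
import numpy as np
from PIL import Image

pixels = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32)
grain = np.random.normal(0, 6, pixels.shape)      # sigma ~6: a faint speckle
noisy = np.clip(pixels + grain, 0, 255).astype(np.uint8)
Image.fromarray(noisy).save("photo_grain.jpg", quality=90)
```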
Crop or Use Partial Images
- Avoid posting full-face, front-facing photos.
- Use side profiles or cropped images that show only part of the face.
- Focus on clothing, background, or activity instead of clear facial shots.
Deepfake AI Tools
Use software to combat deepfakes, such as watermarking and tools that partially obscure faces.
- AI Detection Tools: Some online tools can detect deepfake content, helping you monitor whether your images have been used without your knowledge. I'm no expert and no doubt there are others, but here are two that were free when I wrote this: Deepware.ai and Sightengine.com.
- Deepfake Blocking Services: Certain platforms are developing AI tools to flag and block deepfakes. Keeping track of these tools and using platforms that employ them can help.
AI Distortion Tools
- Fawkes and Similar Tools: Tools like Fawkes, developed by the University of Chicago, can cloak facial images. These tools make subtle, invisible alterations to the image that confuse facial recognition and deepfake software.
- Glaze: Like Fawkes, Glaze helps artists protect their images by altering their facial and visual features in a way that’s invisible to humans but prevents AI from learning to mimic their work.
- Image Composition: Using group rather than individual images with more complex backgrounds may help.
- Facial Recognition Blocking: Some software tools can block or alter the metadata in images, making it harder for AI systems to detect and extract faces from images.
How to Identify Potential Deepfake Photos
Unnatural Facial Features
- Look for expressions that seem frozen or emotionless, especially around the eyes and mouth.
- Skin may appear unnaturally smooth, with lighting that doesn’t match the rest of the image.
- Example: A face that looks airbrushed or oddly lit compared to the surroundings.
Inconsistent Lighting and Shadows
- Check if light sources and shadows match the environment.
- Deepfakes often struggle with realistic lighting angles.
- Example: A shadow falling in the wrong direction or missing entirely.
Blurry or Distorted Areas
- Watch for blurry patches or double edges, especially around facial features.
- These can be signs of AI blending errors.
- Example: Blurring around the eyes, mouth, or hairline.
Odd or Inconsistent Backgrounds
- Look for backgrounds that seem artificially generated or poorly blended.
- Deepfake photos may include repeating patterns, warped objects, or mismatched depth.
- Example: A bookshelf that curves unnaturally, or identical plants copied across the image.
Lack of Fine Detail
- Deepfakes often miss natural imperfections like frizzy hair, pores, or subtle wrinkles.
- Example: Hair that looks painted on or skin that’s unnaturally smooth.
Watermarks and Digital Artifacts
- Look for anything that seems off, such as blurry patches, rough edges, or odd shadows near someone’s face.
- These can be signs of editing. Some fake images leave behind tiny clues, like leftover marks from the software that made them.
- Example: Jagged lines around objects, or a faint ghost-like glow near someone’s eyes or mouth.
Audio and Video Deepfakes
While imagery deepfakes are a concern, audio and video deepfakes pose a more immediate threat due to their dynamic and engaging nature. Imagery deepfakes can be used to create fake photos or manipulate existing ones, but audio and video deepfakes can more convincingly mimic real people and situations, making them harder to detect and more impactful.
Identifying Audio Deepfakes
- Listen for Inconsistencies: Pay attention to unnatural pauses, changes in tone, or background noise that seems out of place.
- Check the Metadata: Look at the file's metadata for any signs of tampering or inconsistencies (there's a short sketch after this list).
- Use AI Detection Tools: Tools like Resemble AI's Detect can analyse audio for signs of deepfake manipulation.
- Verify with Known Samples: Compare the suspicious audio with known, verified samples of the person's voice.
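If you're comfortable with a little code, metadata can be inspected with the mutagen audio library, as in this hedged sketch; the filename is a placeholder, and odd or missing tags are a weak signal rather than proof of manipulation.

```python
# Dump an audio file's metadata with mutagen (pip install mutagen).
import mutagen

audio = mutagen.File("suspicious_message.mp3")
if audio is None:
    print("Unrecognised audio format")
else:
    print(f"Duration: {audio.info.length:.1f} seconds")
    print(audio.pprint())   # all tags: look for odd encoder names or missing fields
```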
A January 2025 survey found that a quarter of UK consumers had received a deepfake voice call in the past year (Which?, March 2025).
Combating Audio Deepfakes
- Watermarking: Embed imperceptible watermarks in audio files to track and verify authenticity.
- Educate and Raise Awareness: Inform the public about the existence and dangers of deepfakes.
- Legal Action: Report deepfake audio to authorities and pursue legal action if it's used for malicious purposes.
Identifying Video Deepfakes
- Look for Visual Anomalies: Check for unnatural blinking, inconsistent lighting, or odd facial movements.
- Use AI Detection Tools: Tools like Microsoft's Video Authenticator can analyse videos and provide a likelihood score of manipulation.
- Reverse Image Search: Use reverse image search engines to find the original source of the video.
Combating Video Deepfakes
- Digital Watermarks: Embed watermarks in videos to indicate authenticity.
- Report and Take Down: Report deepfake videos to platform administrators and request takedowns.
- Legal Action: Legal action is an option, if deepfakes are used to harm or defame individuals.
Taking Down and Reporting Deepfakes
Monitor and Remove Unauthorised Content
- Image Tracking Tools: Use reverse image search engines (like Google Images) to search for your images online. This helps you track where your images are being used.
- DMCA Takedown Requests: If your images are used without consent, you can file DMCA (Digital Millennium Copyright Act) takedown requests with the platform where the image is posted. In the UK, you can also request removal under data protection laws.
Educate and Advocate
- Report Abuse: Report any instance of deepfake abuse to the relevant platform immediately.
- Legal Action: Deepfakes used to harass, defame, or intimidate are illegal in many jurisdictions. In the UK, laws surrounding cyber harassment, defamation, and the sharing of intimate images without consent may apply.
Deepfakes - Legal Position. I am not a lawyer, but my research suggests that the following legal protections exist in the UK.
- There have been amendments to the Data (Use and Access) Bill.
- Another addition is a possible two-year prison sentence under the Sexual Offences Act for creating sexual deepfakes.
- The Online Safety Act makes it illegal to share or threaten to share non-consensual sexual images on social media.
I thought this June 2025 article by Euronews was quite informative about UK and other European laws.
Related Deepfake Security Measures
- Be Selective with Connections: Only accept friend requests or follows from people you know and trust.
- Metadata Scrubbing: Remove all personal or identifying metadata from images before posting. Some photos include hidden data such as geolocation, device details, or time stamps that could help identify or target individuals; there's a short sketch of one way to strip this below.
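As one hedged example of how to do this in Python with Pillow: copy only the pixel data into a fresh image, so EXIF tags (including GPS) are left behind. The filenames are placeholders.

```python
# Strip EXIF metadata by rebuilding the image from raw pixels (Pillow).
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
clean = Image.new("RGB", img.size)
clean.putdata(list(img.getdata()))   # pixels only: EXIF/GPS tags are not copied
clean.save("photo_no_metadata.jpg")
```

Some platforms strip metadata on upload anyway, but it's safer not to rely on that.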
Social Media Platforms
Here are some things to look out for that indicate a social media platform may be lower risk in terms of deepfakes.
- Clear Policies and Reporting Mechanisms: Platforms with explicit policies against deepfakes and easy-to-use reporting mechanisms are more likely to take swift action.
  - Look for platforms that provide clear guidelines on what constitutes a deepfake and how users can report them.
- Detection Technology: Platforms that invest in AI and machine learning technologies to detect deepfakes are better equipped to identify and remove such content quickly.
  - Check if the platform mentions the use of such technologies in its safety or content moderation policies.
- Transparency and Accountability: Platforms that are transparent about their content moderation processes and provide regular updates on their efforts to combat deepfakes are more trustworthy.
  - Look for platforms that publish transparency reports detailing the number of deepfakes detected and removed.
- Avoid platforms that host explicit content and/or are known for being used by bad actors who create and share deepfakes.
- Collaboration with Experts: Platforms that collaborate with external experts, researchers, and organisations to improve their detection of and response to deepfakes are more proactive.
  - Check if the platform mentions partnerships with academic institutions or cybersecurity firms.
- User Education and Awareness: Platforms that prioritise educating users about the risks of deepfakes and how to spot them are more likely to foster a safer environment.
  - Look for platforms that provide resources and tips on identifying deepfakes.
- Rapid Response Time: Platforms that have a track record of quickly responding to reports of deepfakes and removing such content are more reliable.
  - Check user reviews and feedback to gauge the platform's responsiveness.
- Other Safety Features: Platforms with ephemeral imagery are safer; that is, photos and videos disappear after being viewed. Some platforms (e.g. Discord) offer private and invite-only spaces, giving you greater control over who accesses your images.
Other risk indicators – allowing anonymous users, public profiles being easily accessible, images being easy to download, and imagery being made public by default.
AI Deepfake Resources & Support
My Image, My Choice - links to a range of resources on support and content removal. However, it's US-focussed. If you've been affected by intimate image abuse, you can get advice and support from the Revenge Porn Helpline.
TED Talks - How to spot fake AI images (13 mins, Apr 25).
Other Online Harm Resources
I like The Space, who are an arts and culture organisation. They produced a series of resources to help arts organisations counter online abuse, which may also be of use to others.
Weird is Your Friend - Keeping People Safe
There are lots of new risks and some are very convincing, but a consistent theme I've found is what I refer to as "weird is your friend". AI often struggles with what a human finds easy to grasp, and no matter how clever the scam, the scammers still have to get you to do something to make it work, so don't. Below is my attempt to create simple, clear advice that'll help to keep your mum, or your little brother, safe.
- If it doesn’t look or feel right, don’t trust it.
- If it looks/sounds too good to be true, it is too good to be true.
- If they make it seem urgent & ask for sensitive information or money, it’s a scam.
- Do not click links or open documents, unless you’re sure these are safe.
- Avoid sharing sensitive personal information online.
- If in any doubt, ask someone you can trust for advice before you do anything they ask you to.
I'd very much welcome any suggestions on how to improve it - ian@charityexcellence.co.uk.
This article on deepfake AI photos and videos is for general interest only and does not constitute professional legal or financial advice. I'm neither a lawyer nor an accountant, and not a tech security expert either, so I am not able to provide this, and I cannot write guidance that covers every charity or eventuality. I have included links to relevant regulatory guidance, which you must check to ensure that whatever you create correctly reflects your charity's needs and your obligations. In using this resource, you accept that I have no responsibility whatsoever for any harm, loss or other detriment that may arise from your use of my work.
If you need professional advice, you must seek this from someone else. To do so, register, then login and use the Help Finder directory to find pro bono support. Everything is free.
Ethics note: AI was partially used in researching this guide.