How to spot AI-edited photos

From fabricated war photos to altered celebrity images, AI has made it harder than ever to know whether a picture is real. Just last week in Dhaka, the Dhaka Metropolitan Police (DMP) insisted that an image of clashes between officers and students had been fabricated using AI – a claim later disputed by journalists on the ground.
The dispute over a single frame reflects a broader challenge now gripping platforms worldwide: how to know whether an image is real. Here are a few things to keep in mind.
Provenance before pixels
Verifying an image begins not with visual inspection but with provenance: who published it first, and where else it has appeared. Tools such as Google's 'About this image' and reverse image search services like TinEye, Google Lens, and Bing Visual Search can reveal whether a photo has been repurposed, altered, or entirely fabricated.
These checks often expose recycled or synthetic content pretending to be news. If an image that claims to document breaking events first appeared on a stock library, art portfolio, or obscure forum, it is likely misleading.
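Reverse image search itself happens in the browser, but once a possible earlier version of a photo is found, a perceptual hash can help confirm whether two files really show the same frame. Below is a minimal sketch in Python, assuming the Pillow and imagehash packages are installed; the file names and the distance threshold are hypothetical examples, not fixed rules.

```python
# Minimal sketch: compare a suspect image against a candidate original
# found via reverse image search, using perceptual hashing.
from PIL import Image
import imagehash

def hash_distance(path_a: str, path_b: str) -> int:
    """Return the Hamming distance between the two images' perceptual hashes."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # 0 = near-identical, larger = more different

# Hypothetical file names for illustration.
distance = hash_distance("suspect_photo.jpg", "earlier_upload.jpg")
if distance <= 5:
    print(f"Likely the same underlying image (distance {distance}); check which appeared first.")
else:
    print(f"Substantially different images (distance {distance}); the match may be coincidental.")
```

A small distance only says the two files are visually close; it still takes reporting to establish which copy came first and whether either was altered.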
Metadata and its limitations
Image files also carry metadata: technical details such as camera make and model, capture time, and editing software. While these can support a coherent backstory, they can also be missing or deliberately altered, so treat metadata as one element of corroboration, not a final verdict.
If an image's metadata does not match the event it claims to show, that can be a warning sign. But if the data is missing, it does not necessarily mean the photo is fake.
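For readers comfortable with a little code, the basic metadata check is easy to script. A minimal sketch assuming Python with the Pillow package; the file name is hypothetical, and an empty result only means the metadata is absent, not that the image is fake.

```python
# Minimal sketch: inspect an image's EXIF metadata with Pillow.
# Missing fields are not proof of fakery; present fields are not proof of authenticity.
from PIL import Image, ExifTags

def print_basic_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found (common after social media re-uploads).")
        return
    for tag_id, value in exif.items():
        tag_name = ExifTags.TAGS.get(tag_id, str(tag_id))
        # Camera make/model, timestamp and editing software are the fields
        # most useful for cross-checking against the claimed event.
        if tag_name in {"Make", "Model", "DateTime", "Software"}:
            print(f"{tag_name}: {value}")

print_basic_exif("suspect_photo.jpg")  # hypothetical file name
```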
Labelling and digital signatures
Technology companies and publishers are working to make provenance more visible. The Coalition for Content Provenance and Authenticity (C2PA) has developed a standard for cryptographically signing content and recording edit histories. Adobe's Content Credentials initiative applies these principles, allowing viewers to check who captured an image and what edits were made.
Camera manufacturers such as Nikon and Leica have begun embedding these verification features, though adoption remains patchy. Google has launched its own watermarking system, SynthID, designed to detect images created with its AI tools. Platforms including Meta have also introduced labelling systems to flag synthetic media.
Why visual "tells" are no longer enough
For years, experts advised watching for visual glitches: extra fingers, warped jewellery, or garbled text in the background. These tells still help occasionally, but newer generative tools make far fewer such mistakes.
So an image that looks fine is not necessarily real. It is better to ask whether the scene is plausible: does it fit the physics, the cultural context, and the other evidence around the event?
How experts verify
Several online platforms assist with photo verification, but professional verification typically combines multiple approaches. Journalists and fact-checkers trace the first appearance of an image, cross-check it against other footage, examine weather and landmarks, and contact original sources.
Forensic techniques such as error-level analysis (ELA) and sensor pattern noise (SPN) analysis can spot signs of editing, but they work best when combined with provenance checks and other supporting evidence.
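As an illustration, the core idea of ELA can be sketched in a few lines of Python using Pillow. This is a simplified version of what forensic platforms do, the file names are hypothetical, and bright regions in the output only hint at possible edits rather than prove them.

```python
# Minimal sketch of error-level analysis (ELA): re-compress a JPEG and
# highlight regions whose compression error differs from the rest of the frame.
# Regions edited after the original save often stand out, but ELA is only
# suggestive and should be weighed alongside provenance checks.
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90, scale: float = 15.0) -> Image.Image:
    original = Image.open(path).convert("RGB")
    # Re-save at a known JPEG quality and reload the compressed copy.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")
    # The per-pixel difference approximates how much each region "resists" recompression.
    diff = ImageChops.difference(original, recompressed)
    return ImageEnhance.Brightness(diff).enhance(scale)  # amplify for visibility

error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")  # hypothetical file names
```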
Why it matters
The spread of AI-generated or mislabelled images can weaken trust in authentic photographs, particularly in moments of protest, conflict or crisis. Many groups are pushing for a system in which every photo carries its origin and AI-generated content is clearly marked, but problems remain: screenshots and misleading re-uploads can strip away the original provenance information.
In the end, no single test can prove if an image is real. Verification depends on the bigger picture: where the image came from, how it spread, and what other evidence supports it.