Think you just casually posted a travel photo? Under the "keen eyes" of large AI models, that photo may be leaking your address, ID number, family relationships, or even your itinerary. During the 2025 World Internet Conference, China Central Television (CCTV) issued a rare high-risk warning: with the spread of multimodal AI, seemingly harmless everyday images have become a new black hole for privacy leaks, and ordinary users are almost completely unprepared for it.

[Image: Cybersecurity, Privacy]

Image source note: The image is AI-generated, and the image licensing service provider is Midjourney.

AI's "image reading" skills far exceed human imagination

Security experts point out that modern AI can not only recognize faces, license plates, and text on documents, but can also infer sensitive information from context. For example, from a photo containing a boarding pass, AI can extract the name, flight number, and seat number, then combine them with public data to guess the user's home city; from a photo of a child's school uniform, AI can identify the school logo and associate it with the family address; even a blurry courier box in the background may expose the recipient's full name and phone number.
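
To make the risk concrete, here is a minimal sketch of how even a basic open-source OCR pipeline, far less capable than a multimodal large model, can pull structured details out of a boarding-pass photo. It assumes the Tesseract engine plus the pytesseract and Pillow packages are installed; the file name and the regular expressions are illustrative assumptions, not details from the broadcast.

```python
# Minimal sketch: extracting text from a boarding-pass photo with
# off-the-shelf OCR. The file name and patterns below are hypothetical.
import re

from PIL import Image
import pytesseract

image = Image.open("boarding_pass.jpg")          # hypothetical file name
raw_text = pytesseract.image_to_string(image)    # plain OCR, no AI "reasoning"

# Naive pattern matching already recovers typical boarding-pass fields;
# a multimodal model needs no patterns at all and also reads the context.
flight = re.search(r"\b[A-Z]{2}\s?\d{3,4}\b", raw_text)   # e.g. "CA1234"
seat = re.search(r"\b\d{1,2}[A-L]\b", raw_text)           # e.g. "23F"
name = re.search(r"\b[A-Z]+/[A-Z]+\b", raw_text)          # e.g. "ZHANG/SAN"

for label, match in (("Flight", flight), ("Seat", seat), ("Name", name)):
    print(f"{label}: {match.group() if match else 'not found'}")
```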

More concerning, researchers have discovered a new type of attack: malicious actors can embed "invisible prompts" in high-resolution images. When a platform's AI pipeline automatically compresses or downsamples the image, these hidden instructions emerge and are executed by the large model, triggering data theft, forgery, or leakage. Users may not even realize they have "authorized" the AI to hand over their data.
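
This class of attack, often described in research as an image-scaling attack, exploits the gap between what a human sees at full resolution and what the model sees after downsampling. As a rough defense-side check, one can preview how a photo will look at typical ingest sizes before uploading it. The sketch below assumes Pillow is installed; the target size and file names are placeholders, not values from the report.

```python
# Minimal defense-side sketch: preview platform-style downsampling,
# since scaling attacks hide content that only appears at lower
# resolutions. Target size and file names are assumptions.
from PIL import Image

TARGET_SIZE = (512, 512)  # assumed thumbnail/ingest size

original = Image.open("photo_to_post.jpg")

# Different resampling filters can surface different hidden patterns,
# so inspect several of them before uploading.
for name, resample in [("nearest", Image.NEAREST),
                       ("bilinear", Image.BILINEAR),
                       ("lanczos", Image.LANCZOS)]:
    preview = original.resize(TARGET_SIZE, resample=resample)
    preview.save(f"preview_{name}.jpg")
    print(f"Wrote preview_{name}.jpg for visual inspection")
```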

CCTV highlights three high-risk behaviors that affect everyone

In response to current risks, CCTV has specifically listed three types of content that must not be shared:

1️⃣ Transportation tickets: train tickets, boarding passes, license plates, etc., which contain names, last four digits of ID numbers, and travel information;

2️⃣ Personal documents: high-resolution photos of ID cards, passports, driver's licenses, marriage certificates, and the like; posting them is tantamount to handing your private data to hackers "packaged and delivered";

3️⃣ Real-time location plus information about children or the elderly: posting photos of children or elderly relatives together with location data can easily be exploited by criminals for targeted fraud or abduction.

In the AI era, privacy protection requires "active defense"

Experts urge users to upgrade their digital security awareness:

Before posting images, be sure to blur key information (even the background needs to be checked);

Turn off automatic geotagging on social media, and check that photos carry no embedded GPS metadata before uploading (see the sketch after this list);

Be cautious when using third-party tools such as AI photo editors or AI image expanders, to avoid uploading original images to unknown servers;

Family group chats and WeChat Moments are not "safe zones": once information is posted, it cannot be taken back.
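
For the geotagging point above, here is a minimal sketch, assuming Pillow is available, of stripping EXIF metadata (including GPS coordinates) from a photo before it is shared. The file names are placeholders, not examples from the broadcast.

```python
# Minimal sketch: re-save only the pixel data of a photo so that EXIF
# metadata, including GPS coordinates, is dropped before sharing.
# Requires Pillow; file names are placeholders.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy pixels into a fresh image, leaving EXIF/GPS tags behind."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # pixels only, no metadata
        clean.save(dst_path)

strip_metadata("vacation_original.jpg", "vacation_clean.jpg")
```

On the command line, ExifTool's `exiftool -all= photo.jpg` achieves the same effect, at the cost of installing an extra tool.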

AIbase believes the essence of this privacy crisis is a head-on conflict between technological convenience and data sovereignty. When AI can "read" your entire life from a blurry background, we can no longer let our guard down with "I haven't done anything wrong." In an era where "everything can be analyzed," protecting privacy means protecting personal safety, and every tap of "send" should be a considered decision.