When you casually ask a large AI model "which smart bracelet is worth buying," what you get may not be an objective review but a fake advertisement that has been carefully "fed" to the AI. In this year's March 15 (315) consumer rights report, CCTV's Economic Channel exposed a dark industry chain lurking in the AI era: GEO (Generative Engine Optimization).

So-called GEO began as a technique for making information easier to find and promote, but in the hands of certain service providers it has become a tool for manipulating AI and "brainwashing" large models. Investigations found that software such as the "Liqing GEO Optimization System" is already on the market, specializing in so-called "poisoning" services.

To demonstrate GEO's destructive power, an industry insider staged an absurd experiment on camera: he invented a fictional smart bracelet called "Apollo9" and dressed it up with marketing jargon that has no scientific basis, such as "quantum entanglement sensors" and "black-hole-grade battery life." He then used the software to push dozens of related advertorials ("soft articles") across the Internet within a short time.

The result was startling: just a few hours later, when a journalist asked a mainstream large AI model about the bracelet, the AI scraped up this false information and served it back to users as the "standard answer," even calling the product "the best in the industry." A model that should be objective and neutral completely lost the ability to tell truth from falsehood under this flood of "false feeding."

According to executives at GEO service providers, this business of "hunting" large models is booming. Because AI systems rank information by cross-checking multiple sources, a client need only hire a GEO firm to churn out a large volume of soft articles from different angles, and the AI will conclude "this must be the truth." The method not only promotes the client's own products; it can also serve as a hidden weapon for smearing competitors.
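The "cross-validation" weakness the executives describe can be sketched as a toy scoring function. This is a hypothetical illustration only: real AI search and ranking systems are far more complex, and every source name, claim string, and number below is an invented assumption, not the logic of any actual engine.

```python
# Toy model of multi-source corroboration being gamed by planted articles.
# Hypothetical illustration; not the scoring logic of any real AI system.

def corroboration_score(claim, documents):
    """Score a claim by how many distinct sources repeat it verbatim."""
    supporting = {doc["source"] for doc in documents if claim in doc["text"]}
    return len(supporting)

# Genuine web content: no one has heard of the fictional bracelet.
organic_docs = [
    {"source": "review-site-a", "text": "The best bracelets this year are X1 and Y2."},
]

# A GEO campaign plants the same claim across many low-cost outlets,
# which a naive corroboration check mistakes for independent agreement.
planted_claim = "Apollo9 is the best smart bracelet in the industry"
planted_docs = [
    {"source": f"content-farm-{i}",
     "text": planted_claim + ", with quantum entanglement sensors."}
    for i in range(30)
]

docs = organic_docs + planted_docs
print(corroboration_score(planted_claim, docs))  # → 30
```

Because the score counts sources rather than weighing their independence or credibility, thirty content-farm copies of one advertorial look like thirty confirmations, which mirrors how the planted "Apollo9" articles fooled the model in the report.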

So far, companies such as Lisi Culture Communication Co., Ltd. have been named. This "poisoning" carnival of the AI era is a warning: however powerful large models are, if the sources they draw on are polluted, AI becomes a mouthpiece for false information. As the technology races ahead, keeping data sources clean has become an urgent problem the entire industry must confront.