Ceres mayor’s AI-enhanced cleanup photo backfires as residents spot the fakery

Mayor Javier Lopez posted a before-and-after photo of a trash cleanup, but residents noticed AI had scrubbed the site better than crews did. The controversy raises questions about transparency.
A dumpster behind an abandoned Rite Aid in Ceres, California, is still filthy. The concrete is stained, the trash is scattered, and the doors are open. But if you looked at the before-and-after photo posted on social media by Mayor Javier Lopez, you would have thought the city had power-washed the place, closed the doors, and left it spotless. You would have been wrong.
Residents noticed the discrepancy immediately and called it out. The photo, posted to showcase a cleanup effort, had been altered using AI. The mayor admitted it — but the damage to trust was already done. The incident, covered by CBS News Sacramento, offers a small but vivid case study of what happens when public officials use generative AI tools to embellish reality.
The photo that didn't match reality
The image in question was a typical before-and-after comparison meant to show progress at a chronic dumping site. The “after” shot showed cleaner ground, closed doors, and an overall tidier appearance. But residents who knew the location saw the truth: the site looked barely different from the “before.” The concrete hadn't been washed. The doors were still ajar. The cleanup — if it happened — looked nothing like the picture.
“More transparency and honesty should come from our top leaders,” one resident told CBS. Another put it bluntly: “Don’t come in and say that you fixed it when you haven’t.”
Mayor Lopez later confirmed that he had used AI to “enhance” the image. He said he lightened the picture and altered the ground, and that he had also closed the doors in the photo. The result, he acknowledged, made it look like the concrete had been power-washed, when in reality it had not. Code enforcement had removed 550 pounds of debris from the site, he said, but the visual cleanup of the ground was purely digital.
The AI label that went unnoticed
Lopez defended himself by pointing out that he had left the “AI label” on the post. Under Instagram and Facebook rules, content created or significantly altered with AI tools must be marked. He argued that this constituted transparency. But critics countered that an automated label in small type is not the same as an honest admission — especially when the image is presented as official government communication about a taxpayer-funded service.
City Councilwoman Serena Otero, whose district includes the site, was not satisfied. She noted that the AI-altered photo also removed a long-standing notice taped to the door — a detail that residents knew was real. “This isn’t exactly cleaned up like we thought it was, and for me, transparency is key,” she said.
Lopez, for his part, said he would do things differently next time. “We have to take accountability,” he told the station. “Next time, I’ll make sure that I come down here with the power washer to clean the concrete.”
What the controversy reveals
The Ceres incident is a relatively small case of AI misuse in local government, but it is not an isolated one. What makes it noteworthy is that it cuts to the heart of a growing problem: as AI image tools become easier to use, the line between honest documentation and digital propaganda blurs.
Public officials are increasingly tempted to use generative AI to polish up reports, social media posts, and even official documents. A mayor enhancing a cleanup photo may seem trivial compared to deepfakes used in political disinformation, but the principle is the same. When a government official presents an altered image as evidence of work done, they break the basic social contract of honest representation. Citizens cannot make informed judgments about their leaders if the photographic record has been manipulated.
There is also a practical concern. If residents cannot trust before-and-after photos, they may stop reporting problems altogether, or they may demand independent verification of every claim — an impossible burden on small-city staff. The backlash in Ceres, while limited to a single Facebook post, sent a message that trickery will not be tolerated.
What happens now
The city says the ultimate responsibility for the dump site falls on the property owner, whom officials have contacted. Code enforcement did perform a physical cleanup — but not of the spot shown in the mayor’s photo, according to the report. The mayor’s office now faces a credibility gap that will take time to repair.
For other municipal leaders, the lesson is straightforward: AI tools are powerful but dangerous in official communications. If a photo is enhanced, the enhancement should be clearly described in plain language, not hidden behind an algorithmic label. Better yet, leave the AI out of it. A slightly less flattering before-and-after that shows real progress is worth more than a perfectly pristine image that turns out to be a lie.
SysCall News has previously covered how AI-generated content is creeping into local government press releases and school district newsletters, often without adequate disclosure. The Ceres incident is a reminder that the technology is moving faster than the norms we use to govern it. Until those norms catch up, any official who reaches for an AI filter ought to ask themselves: if the truth doesn’t look good enough, maybe the work isn’t done yet.
Staff Writer
Maya writes about AI research, natural language processing, and the business of machine learning.