San Diego Police ban generative AI in official reports under new state law

San Diego Police Department prohibits generative AI tools in reporting, citing compliance with California's new disclosure requirements on AI use.
The San Diego Police Department (SDPD) has officially prohibited its officers from using generative artificial intelligence (AI) for drafting or compiling police reports. This new directive aligns with a California state law aimed at increasing transparency in the use of AI by public agencies.
California's new AI disclosure law
The state law, enacted just two months ago, requires all police departments in California to disclose the use of AI in generating or assisting with official reports. If AI is employed, departments must retain all AI-generated content for the life of the investigation and report. The law underscores public concerns regarding the ethical application and accountability of AI in law enforcement.
San Diego’s decision highlights a cautious approach to integrating this emerging technology into critical public services. By outright banning generative AI for now, the department ensures compliance with the state’s strict regulations while questions about trustworthiness and security in AI-generated content remain unresolved.
SDPD’s formal order
The SDPD issued its directive in December 2025, detailing the new policy in a training order obtained by CBSA. The department stated that employees are strictly prohibited from using generative AI tools unless specifically approved by department administration and explicitly authorized in official training policies. The measure is intended to prevent misuse of, or confusion about, the permitted scope of AI tools in law enforcement processes.
What the ban entails
- Prohibited tools: Officers cannot use any generative AI programs (e.g., ChatGPT, Bard, or other similar technologies) to create or assist in preparing police reports.
- Conditional approval: AI use is only permitted if the department reviews and sanctions specific applications.
- Compliance with training policies: Approved AI tools must comply with official department training requirements.
Why some are cautious about AI in policing
The use of artificial intelligence in public institutions like law enforcement raises significant ethical and practical questions. Generative AI systems are not immune to errors, which could lead to inaccuracies in critical documents such as police reports. Moreover, concerns about bias, accountability, and data security make the adoption of such tools a contentious issue.
Public agencies face added scrutiny due to the high stakes of their responsibilities. Mistakes in police reports could have lasting legal and societal repercussions. These risks have made some departments hesitant to adopt these technologies without thorough vetting, and San Diego’s complete prohibition reflects that cautious stance.
Furthermore, retaining AI-generated material as legally mandated creates additional logistical challenges. Departments would need secure and expansive data storage capabilities and systematic oversight to ensure compliance with the record retention provisions of the state law.
How other departments are responding
While San Diego Police banned generative AI completely, not all police departments in California have imposed similar restrictions. Some are exploring limited use of AI in non-critical applications, such as managing administrative tasks. The state law, however, applies uniformly, requiring every policing agency to document and disclose AI use wherever deployed.
Comparison: Generative AI in law enforcement
| Department | Policy on Generative AI | Action Plan |
|---|---|---|
| San Diego Police Department | Ban on generative AI except pre-approved cases | Prohibit use; adherence to new state AI policy |
| Los Angeles Police Department | Reviewing potential use of AI in back-office functions | Pilot programs for non-investigative applications |
| San Francisco Police Department | Limited use in predictive analytics | AI-generated records to comply with data retention |
Key takeaways for law enforcement
- Disclosure requirements: California’s law demands transparency in AI use, imposing accountability measures on public institutions.
- Potential risks: Errors and ethical concerns, including bias and data security, make unchecked use of generative AI problematic.
- Proactive measures: Departments can avoid legal issues by adopting clear guidelines, building oversight systems, and being selective about where AI is implemented.
These steps signal that the broader application of generative AI in police work will be a cautious, deliberate process, at least in jurisdictions like California.
Implications for the future
The SDPD’s outright ban highlights a growing trend toward stricter governance over emerging technologies in the public sector. As AI tools become more prevalent, balancing their potential benefits against the risks will remain a priority for policymakers and law enforcement leaders.
For now, transparency and strong oversight are the watchwords for integrating AI into public service. San Diego’s step provides a clear example of how departments can pause and implement comprehensive policies while adapting to rapidly evolving technology. How other departments handle the same challenge will shape the debate on AI in public service across the country.
Staff Writer
Chris covers artificial intelligence, machine learning, and software development trends.