
Lawyers and artificial intelligence: new rules reshape legal practice

By Chris Novak

AI tools are transforming legal practices, but ethical rules and regulations are shaping how they can be used safely and responsibly in courtrooms.

Artificial intelligence (AI) has already made its way into law offices and courtrooms across Europe, fundamentally altering the way legal work is conducted. Its rapid adoption, however, has raised critical questions about responsibility, data security, and professional ethics. With AI-generated errors and "hallucinated" legal citations already surfacing in court proceedings, regulatory bodies for lawyers have started taking decisive steps to set boundaries for AI use. These efforts aim to ensure that AI aids lawyers without compromising ethical and professional standards.

The rise of AI in legal work

Over the past two years, generative AI tools have exploded in popularity among lawyers. These systems are employed for a variety of tasks, including legal research, drafting document templates, and analyzing vast troves of data. The allure of AI lies in its ability to accomplish these complex tasks quickly and at significant cost savings, advantages that carry real weight in a traditionally resource-intensive profession.

Yet, as with every innovation, adopting AI comes with risks. Errors introduced by AI, often called "hallucinations" (fabricated citations or incorrect information), have already drawn scrutiny in legal cases. Lawyers who relied on AI-generated inaccuracies have faced professional misconduct inquiries and, in some instances, penalties for "reckless litigation." These early stumbles have spurred legal organizations to act before AI's misuse becomes widespread.


The HOROS framework: pioneering AI regulation

Italy's legal sector has taken a proactive stance in addressing these challenges. A key first step came in December 2024 with the release of the HOROS guidelines by the Milan Bar Association. HOROS, which translates to "boundary," establishes clear principles for using AI tools in legal practice. It rests on four main pillars:

  • Transparency: Lawyers must understand how the AI tools they use function and highlight the limitations of these systems.
  • Human oversight: Legal judgment must remain firmly under human control. AI tools can assist but not make decisions.
  • Non-discrimination: Lawyers must vigilantly monitor for biases in AI algorithms, particularly those involved in predictive legal analysis.
  • Data security: Client data must remain protected, and lawyers are urged not to input sensitive case information into platforms without sufficient privacy guarantees.

Importantly, HOROS goes beyond codifying ethical standards. It also introduces a framework for monitoring how lawyers use AI tools over time, fostering ongoing evaluation and adaptation as technology evolves.

Expanding the regulatory landscape

Milan’s initiative sparked similar moves from other legal bodies across Italy. The bar associations in Rome and Turin developed their own documents to guide lawyers in navigating AI responsibly. On a broader scale, the Council of Bars and Law Societies of Europe (CCBE) published a comprehensive guide in October 2025 detailing Europe-wide recommendations for AI.

The CCBE guide emphasizes two critical risks in particular:

  1. Confidentiality: Every prompt entered into an AI system can unintentionally expose sensitive client data. This is a pressing concern given the lack of adequate contractual safeguards in many commercial AI platforms.
  2. Accountability for errors: Lawyers remain fully responsible for any mistakes generated by AI tools. Using AI does not absolve human practitioners of their duty to ensure legal accuracy and integrity.

Unlike HOROS, the CCBE’s guide is non-binding. Still, it carries considerable weight as a reference for good practice across legal jurisdictions.

Law n. 132/2025: a milestone for regulation

The regulation of AI in the legal field took a major turn in October 2025, when Italy passed its first comprehensive AI legislation, Law n. 132/2025. The law introduces mandatory disclosure requirements for lawyers using AI in their practice. Specifically, lawyers must inform clients if and how they plan to use AI tools. They must also make clear that all legal responsibility for the results, including any errors produced by AI, rests with the lawyer.

To facilitate compliance, the Italian National Bar Council (CNF) distributed a standardized form for lawyers to share with clients. Signed by both parties, the form specifies the scope of AI use, maintains lawyer accountability, and reassures clients that their data will remain protected under GDPR (General Data Protection Regulation) standards. While this form is not mandatory, it has quickly become an unofficial standard across the country.

Key principles shaping AI adoption

Three consistent principles run through all of these initiatives:

  • Human primacy: AI is a tool, not a decision-maker. Lawyers remain wholly responsible for any output, no matter how much AI is involved.
  • Transparency: Clients deserve to know if AI is being used and should be informed in clear, accessible language.
  • Non-delegable accountability: Responsibility ultimately rests with the lawyer. Errors generated by AI cannot serve as excuses.

These principles serve dual purposes: ensuring AI is ethically integrated into legal work and promoting public trust in a profession that stands at the intersection of justice and technology.

The road ahead

AI is no longer an optional add-on for lawyers—it has become a necessary part of modern legal practice. But with that necessity comes serious responsibility. The HOROS framework, the CCBE guide, and Italy’s new AI law set crucial standards in this emerging area. For lawyers, these documents are more than guidelines; they are a roadmap for navigating the complex balance between embracing technological innovation and upholding ethical duty.

This growing body of regulation sends a clear message: using AI irresponsibly is no longer an option. For the legal profession, the question is no longer whether to use AI, but how to ensure its use aligns with the values of justice, fairness, and accountability.

Chris Novak

Staff Writer

Chris covers artificial intelligence, machine learning, and software development trends.
