Artificial intelligence is no longer a “future” concern for legal professionals, the courts, clients, and litigants. Since the summer of 2025, many in the profession have shifted from trial (no pun intended) and error to wide-scale adoption of AI tools. This transformation has been matched by rapid developments in regulation, judicial oversight, vendor disputes, and ethical expectations. AI is increasingly becoming a regulated tool, a potential source of liability, and a key factor driving changes in the delivery of legal services. This year-end article surveys the developments shaping the practice of law as the use and integration of AI tools accelerates. It also provides practical guidance for firms seeking to operationalize responsible AI governance while capturing the efficiencies the technology now routinely offers.
Litigation and Rule Spotlights
In June 2025, the Advisory Committee on Evidence Rules recommended that proposed Federal Rule of Evidence 707 (“FRE 707”), which addresses the use of AI in generating evidence for litigation, be released for public comment. Proposed FRE 707, Machine-Generated Evidence, provides that “[w]here machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court must find that the evidence satisfies the requirements of Rule 702(a)-(d). This rule does not apply to the output of basic scientific instruments.” The accompanying Committee Note explains that FRE 707 is intended, in part, to ensure that reliability standards are satisfied for AI-generated evidence. The proposed rule would appropriately “address the circumstance in which machine-generated, expert-like conclusions are offered without an accompanying expert witness.” The takeaway for federal practitioners is that the proposed rule is designed to protect the integrity of evidence when AI is used.
The second half of the year also saw a number of high-profile cases that made headlines by showing how deeply AI is now embedded in the business and practice of law. One of the clearest signals came from the legal technology industry itself. On November 26, 2025, Fastcase, a legal publishing and research company owned by Clio, filed a complaint in federal court against the AI legal technology firm Alexi, alleging breach of contract and trademark infringement arising from a failed business relationship that evolved into direct competition. According to the complaint, Alexi was granted limited, internal-only access to Fastcase’s case law database for use by its staff attorneys in preparing client memoranda. The crux of Fastcase’s claim is that Alexi used Fastcase’s proprietary legal data to train, build, and scale its own competing AI legal research platform in violation of licensing restrictions. The case has already spurred conversations about training-data provenance, vendor due diligence, and the limits of permissible data use.
The dispute underscores that performing adequate due diligence is key when selecting a vendor and purchasing AI products and licenses. The vendor agreement often governs whether client data may be used for model training, what security measures apply, how data is retained or deleted, and whether there is indemnification for intellectual property or privacy claims. The Fastcase v. Alexi litigation illustrates the importance of negotiating clear and unambiguous contract terms regarding use of training data, restrictions on model development (if any), and the vendor’s obligation to defend and indemnify the firm if the model’s operation triggers litigation.
Practical Ethical and Risk-Management Obligations
The legal community has also seen an increase in the number of reports and preliminary guidance documents issued by state bar committees and courts addressing risks associated with AI in litigation. These publications have addressed topical issues such as discovery involving machine-generated outputs, expectations for attorney verification of content, the need for standards regarding transparency when AI tools are used, and how to navigate preservation concerns. See, e.g., “Technology and the Future Practice of Law,” VA State Bar (June 9, 2025); “AI and Georgia’s Courts,” Findings and Recommendations of the GA Judicial Council Ad Hoc Committee (July 3, 2025); “Artificial Intelligence-Enabled Tools in Law Practice,” WA State Bar Advisory Op. 2025-05 (Nov. 20, 2025). These developments illustrate that institutional and regulatory clarity is no longer hypothetical. State judiciaries and bar committees are issuing guidance that imposes concrete obligations on lawyers and firms. For attorneys, this means AI cannot be treated as merely an efficiency enhancer. It is an instrument that implicates duties of competence, supervision, confidentiality, and candor, among other obligations.
Competence in the use of AI now includes understanding how generative systems work, the nature of their limitations, and the specific risks associated with hallucinated information. Because AI tools are not legal researchers, lawyers must verify every authority and factual representation derived from them. Courts have made clear that failure to conduct independent verification may lead to sanctions, including, for example, an award of attorney’s fees, referral to disciplinary authorities, or disqualification. The increase in hallucination incidents throughout 2025 further underscores that a “trust but verify” posture may be insufficient. The more appropriate minimum standard is to “verify first, then use.”
The Bottom Line
AI continues to reshape litigation risk, regulatory exposure, and practice economics. This past year has shown that the evolution is accelerating, and that courts, regulators, and clients now expect lawyers to treat AI with the same seriousness as other complex technical systems. Practitioners who build robust governance frameworks and incorporate AI with deliberate, ethical oversight may not only reduce risk, but also position themselves at the forefront of the profession as the technology continues to evolve.
Reprinted with permission from the December 18, 2025 edition of The Legal Intelligencer © 2025 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or reprints@alm.com.