Artificial Intelligence, a Novelty that will Become an Inevitability for Lawyers[1]


By David Langham[2]

Large language models (LLM) have been foreseen and discussed for over a decade.[3] Nonetheless, their debut in late 2022 caught many unawares and unprepared. Lawyers who ignored them are now struggling to catch up, while “blind-faith” early adopters are recovering from professional embarrassment, and worse. Enthusiasm and wonder soon gave way to doubt and reservations when lawyers were sanctioned for hallucinations they had gullibly or trustingly included in court filings. The penalties included financial, reputational, and corrective components.[4] Other instances resulted in dismissal of actions.[5] Hundreds of such examples of AI hallucination in the U.S. have now been documented, commented upon, and referenced.[6]

Lawyers soon splintered into enthusiastic early adopters, cautious experimenters, and risk-averse resisters.[7] The reality remains: LLM are present, evolving, improving, and compelling. The future of legal practice will include LLM, specialized legal research tools, and likely future tools that we do not currently perceive, predict, or comprehend. Therefore, it is critical for the legal profession to recognize the strengths and advantages that LLM can bring. The following engagement advice is offered to facilitate understanding of what LLM are and what they empower, and those points are followed by the ten commandments lawyers must follow to avoid the triple threat of embarrassment, discipline, and malpractice.

Practical Strengths of AI Engagement for Legal Professionals – How to Engage Large Language Models

Use it to Start, not to Finish. LLM excel at generating outlines, bullet points, memos, or first drafts. This is a product of training and repetition, not innate creativity.[8] Know that they will be flawed, unrefined, and are likely to include errors of both authority and analysis, as much as or more than humans will. Any production of rudimentary and foundational work can be dangerous if accepted verbatim. Nonetheless, such input and inspiration can be a valuable tool in the hands of a professional who applies judgment, knowledge, and criticism to the efforts of refining, reorganizing, and verifying. LLM are a head start on the hard work, not a substitute for it.

Use the LLM as a Critic and a Foil. Use LLM to spot the lawyer’s shortcomings or blind spots. The software may brainstorm various objections, counterpoints, and perspectives that the lawyer overlooks or discounts. The lawyer should always fear their own inherent predispositions and blind spots. The LLM is an excellent lens through which critique and comment can be gathered quickly, inexpensively, and even confidentially if approached with deliberate care. In this, it is critical that appropriate prompts (instructions) are used, and it is best if multiple LLM and prompts are used, the better to stimulate conversation and challenge. In the same vein, using the same LLM both to brainstorm and to critique may be less effective. Humans and LLM alike may suffer from predisposition or the simple draw toward their own drafting, logic, or structure, conscious or not. Engaging multiple LLM platforms for comparison and contrast is as productive as surveying various humans for reaction or feedback.

Count on the LLM to Handle Volume. The LLM is able to evaluate reams of documents, depositions, records, or correspondence more rapidly than any team of professionals. This may include highlighting the location and context of particular words or phrases, producing summaries of entire document populations, or comparing and contrasting factual statements or conclusions. This capability can assist with prioritizing, organizing, and facilitating human review and analyses.

Have the LLM Analyze both Writing and Likely Responses. The LLM can efficiently interpret tone and help ensure that a brief reads like a brief while a letter reads like a letter. The audience for legal documents matters, and word choice in communicating with other lawyers may differ from the appropriate vocabulary choices with clients, witnesses, or others. The readability and voice are easily evaluated by an LLM. The tool can also use a document or catalog of documents to establish a particular writer’s patterns and proclivities, and make suggestions about a work in progress with a goal of broader consistency. In a more micro-focus, the tool can render similar suggestions for maintaining consistency within the pleading, summary, or other document to facilitate reader comprehension and retention.

Leverage your own or your Firm’s Expertise. Avoid the internet and all that may reside there; point the LLM to your own work product to identify past successes, potential challenges, and even patterns in substance or procedure. When focused on internal sets of a law firm’s prior work product, the LLM can rapidly locate arguments and theories previously explored or developed, providing research and comparison guidance in both instances of past success and shortcoming. This may be employed with existing LLM tools or with specialized LLM platforms designed for lawyers or even proprietary to a firm.

Pair Current Legal Research Tools and Your Best Practices. It is critical that lawyers understand the strengths and limitations of tools like Westlaw or Lexis, such as speed, currency, and accuracy. These are complementary to the strengths of LLM, such as fluency, composition, and completeness. Competent and competitive lawyers must know and appreciate the strengths and potential shortfalls of each; use the research tools for the “what” and the LLM for the “how.” Combining the currency and reliability strengths of legal tools with the drafting skill of an LLM can produce stronger drafting and minimized potential for hallucination, misperception, or confusion. Critically, this strategy can reduce or minimize, but cannot eliminate, risks or challenges. Both LLM and humans can misconstrue or “hallucinate” the logic, ratio decidendi, or even the holding. Integration of the two tools empowers and facilitates LLM engagement, but does not replace human discretion and judgment in the final product.

The Ten Commandments of AI Engagement for Legal Professionals: Precautions with Large Language Models (LLM)

I. Thou Shalt Not Take LLM Output on Faith

Citations, holdings, and even case facts can be hallucinated. If the lawyer fails to check LLM output and a falsity or misstatement is therefore filed, consequences will likely follow. Best scenario, your case is lost; worst scenario, your reputation is paraded in the news. LLM predict words. They do not think; they replicate. Like a research assistant or paraprofessional, they may know just enough to scratch the surface. They do not verify facts or exercise judgment. They may lack nuance or discretion. If you do not independently check every case name and statutory reference against the primary source, you are one unfiled motion away from a sanctions hearing.[9]

II. Thou Shalt Not Surrender Thy Professional Judgment

AI is a tool, not a colleague. Computers and software can provide “starting points,” but interpretation, analysis, nuance, and judgment must be yours. Too often, the starting points for LLM responses are internet sources of dubious value, credibility, and provenance.[10] Your name is on the filing, and your license is on the line. Blind delegation to a machine does not absolve you of the duty to think any more than delegation to a paralegal or clerk does.[11]

III. Thou Shalt Not Feed the Machine Thy Client’s Secrets

The prompt box is not a vault; it is a display window. Putting confidential client information into any third-party LLM, storage, or search engine may be as safe as emailing it to everyone in your address book. Even with an ironclad safeguard or promise of safety from that third party, the responsibility and liability remain yours. Treat AI tools with caution and circumspection. Sanitize the client’s personal data and confidences from these tools to preserve privacy and attorney-client privilege.[12] Whatever information is submitted, in many LLM, can become the property[13] of the platform and may be retrievable[14] or even discoverable[15] as a result. Confidences and privileges must be omitted from what is shared with an LLM.

IV. Thou Shalt Not Mistake Fluency for Accuracy

Don’t mistake confidence for competence. A confident, cogent, coherent answer is not necessarily a correct one.[16] Humans naturally expound, articulate, and explain. LLM are specifically designed to emulate this. Their ability to sound authoritative and convincing is not a glitch; it is a feature. Know also that all humans have personal predispositions.[17] Lawyers must remember that the LLM may have them as well. Remain skeptical of the LLM output. Authoritative does not substitute for accurate.[18]

V. Thou Shalt Not Ignore the Knowledge Cutoff

The law is fluid; the LLM are evolving. LLM are persistently scanning, training, and expanding. Nonetheless, any tool will have limits. Some LLM have access to the internet and current information, while others may have trained only on information that is months or years old. Professionals must remember statutes are amended, and case interpretations and holdings are overturned. The tool you are using may not access or appreciate such changes, subtleties, and nuance. Always verify that output and authorities are current, or risk untoward or unfortunate outcomes.[19] Know that AI tools and legal rulings[20] surrounding them are persistently evolving, and currency is necessary from both technology and legal perspectives. If you are not sure how current a tool is, ask it.

VI. Thou Shalt Not Use AI Thou Dost Not Understand

Technological competence is mandatory and ongoing. Every competent user must understand the limitations and guardrails of their tools: Are there answers a tool is instructed not to provide? Are there sources it cannot access? This does not mean we must all be coders, but you must know how to use the tool and how it accomplishes its result. Know that the security and privacy of various platforms can be inconsistent, and may depend on the user’s subscription or agreement terms. Enterprise-grade security is the ideal for client information; see Commandment III.[21] No judge will accept the simple “the computer did it.” If you can’t explain the “how,” you shouldn’t be using the “what.”[22] Tasks are delegable, but responsibility is not.[23]

VII. Thou Shalt Not Automate Thy Billing without Scrutiny

Fee transparency does not pause for the machine. The ethical bounds on billing have not changed. If a task takes twelve minutes, bill twelve minutes. If it would have taken two hours without the LLM, still bill twelve minutes. The LLM efficiency benefits the client and frees you to move on to other tasks; it is not license to multiply your billable hours through misrepresentation. Adjust your practice to reflect the new reality of efficiency before a fee dispute forces the issue for you.[24]

VIII. Thou Shalt Not Treat All AI Tools as Equal

An LLM is not a legal research platform. There are platforms built for legal support, designed for lawyers. These are trained on legal information and often use closed populations of legal information isolated to jurisdiction, practice, or process. Know the capability and construction of your tool. Learn how your tool was trained and how it functions before you and your client rely on it. Using a toy to do a craftsman’s job is a dereliction of duty that no “errors and omissions” policy is eager to cover.[25]

IX. Thou Shalt Not Neglect to Document Thine AI Use

Show your work. Documentation may become critical in supporting the client and yourself. Protect each by maintaining a record of what, how, and when an LLM was used. This includes the platform used, the prompts (questions), as well as how you protected privacy and privilege and verified the accuracy of the output. If your work is ever challenged, “I verified everything” only holds water if you can prove it. “If you didn’t document, it didn’t happen.”[26]
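For practitioners who want a concrete starting point, the record-keeping described above can be sketched as a simple structured log entry. This is an illustrative sketch only; the field names, matter number, and platform description below are hypothetical assumptions, not a prescribed standard or any bar-mandated format.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class AIUsageRecord:
    """One log entry documenting a single LLM engagement.

    Field names are illustrative, not a prescribed standard.
    """
    matter: str                  # client/matter identifier (sanitized)
    platform: str                # which LLM tool was used
    used_on: str                 # ISO date of use
    prompts: list = field(default_factory=list)  # prompts submitted
    privacy_steps: str = ""      # how confidences were kept out
    verification: str = ""       # how output accuracy was confirmed

    def to_json(self) -> str:
        # Serialize the entry so it can be stored with the matter file.
        return json.dumps(asdict(self), indent=2)

# Hypothetical example entry
record = AIUsageRecord(
    matter="2026-0042 (sanitized)",
    platform="General-purpose LLM (enterprise tier)",
    used_on=date(2026, 3, 19).isoformat(),
    prompts=["Outline counterarguments to a summary-judgment motion"],
    privacy_steps="Client names and identifying facts replaced with placeholders",
    verification="Every cited authority checked against the primary source",
)
print(record.to_json())
```

A plain spreadsheet or matter-management note capturing the same fields serves the same purpose; the point is that platform, date, prompts, privacy steps, and verification are all recorded contemporaneously.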

Beyond internal documentation and professional protection, various courts have adopted requirements for certification in each filing about the use of AI.[27] That trend may wax or wane, and there is currently significant discussion about how courts will address the challenges of misused LLM and attorney over-reliance, blind trust, and inattentiveness.

X. Thou Shalt Not Forget That Opposing Counsel Hath AI Too

Complacency is a competitive liability. Tools are designed to act and react, question and answer. Practice will be faster and increasingly reliant on tools. Each party will be enticed by LLM strengths and susceptible to their challenges. The professional’s edge lies in using these tools to sharpen, expand, and grow personally, not to yield, delegate, or blindly trust. The human must always remain “in the loop,” as instigator, critic, and ultimate judge.[28] Intellect will only grow and thrive with challenge and engagement. Only complacency, not the machines, will bring you embarrassment and ruination.[29]

Conclusion

The reality for most lawyers will be how and when to leverage the advantages of the LLM. All must accept that there are benefits and challenges, and that juxtaposition will likely persist. The tools offer speed, breadth, efficiency, and advantage. The advantages must be weighed against the potential shortcomings, experiential deficits, and challenges. Following the advice here, the lawyer should be able to consciously and persistently strive for best practices while remaining acutely alert to perils and penalties. Technology will continue to evolve, and so will the practice and its challenges. Evolution is persistent, and the pace with LLM is frantic at times. The future of legal practice belongs to the lawyer who uses AI to sharpen their strengths, not the one who uses it to replace their pulse. Engagement is no longer optional; it is an inevitability.

_____________

[1] The contents of this paper were drafted without the aid or influence of any non-human. In the course of editing, various LLM platforms were engaged as proofreaders, critics, and devil’s advocates consistent with the advice on LLM engagement rendered herein, see endnote 12.

[2] Judge Langham has presided as judge of compensation claims since 2001 and served as deputy chief since 2006. He has written multiple books, dozens of articles, and thousands of blog posts. He has presented thousands of professional lectures and panels across the country, and striven to harness the technological revolutions of our age.
[3] Langham, Chatbot Wins 160,000 Legal Cases - the "Future is Now," Florida Workers’ Comp Blog, June 30, 2016. https://dwlangham.blogspot.com/2016/06/chatbot-wins-160000-legal-cases-future.html, last visited March 19, 2026.
[4] Mata v. Avianca, Inc., 22-cv-1461 (S.D.N.Y. 2024), https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/, last visited March 19, 2026.
[5] Kruse v. Karlen, EDI 111172, (Mo. Ct. App. 2024); Samuel v. Winsley Focia, B346730, 2026 WL 539183 (CA Ct. App. 2nd 2026); Devita v. Midtown Motors, 1:25-cv-435-RAH-KFP (M.D. Ala 2026); as part of grounds in Lu v. Capital One, 1:25-cv-1057 (E.D. Ohio 2026); Dillard v. CBS Studios, 8:25-cv-02091-JAK (C.D. California 2026); Hardy v. Whitaker, 1:24-cv-11270, 2026 WL 575225 (E.D. MI 2026); Volker v. Nygaard, 20250309, 2026 WL 533638 (ND 2026).
[6] https://www.damiencharlotin.com/hallucinations/?sort_by=case_name&states=USA&period_idx=0, last visited March 19, 2026.
[7] Creating psychological safety in the AI era, MIT Technology Review Insights, December 16, 2025, https://www.technologyreview.com/2025/12/16/1125899/creating-psychological-safety-in-the-ai-era/?gad_source=1&gad_campaignid=16701804390&gbraid=0AAAAADgO_mhFtli9G5QbPtstBAyGVvNgM&gclid=Cj0KCQjwmunNBhDbARIsAOndKpnKic7_O88Yp2Mb6FbSMAQMnhp9LL4fzLR4zXPiJJLaTKKZoGzSPHcaAhtiEALw_wcB, last visited March 19, 2026 (“Psychological barriers are proving to be greater obstacles to enterprise AI adoption than technological challenges”).
[8] Nikolai Berdyaev, The Meaning of the Creative Act, (1917); see also Andrew McDiarmid, Why AI Can’t Create Like we can, MindMatters, March 8, 2026, https://mindmatters.ai/2026/03/why-ai-cant-create-like-we-can/, last visited March 19, 2026 (“AI is necessity by definition, wholly lacking in the freedom from which true creativity emerges”).
[9] ByoPlanet Int'l, LLC v. Johansson, 792 F. Supp. 3d 1341, 1353 (S.D. Fla. 2025)(Fed.R.Civ.P. 11(c)(1), (3); R.Reg.Fla.Bar Rules 4-1.1, 4-1.3, 4-3.3, and 4-8.4(c)).
[10] Gary N. Smith, Large Language Models May Soon be Smarter than Humans, MindMatters May 9, 2025, https://mindmatters.ai/2025/05/yes-large-language-models-may-soon-be-smarter-than-humans/, last visited March 19, 2026 (“searching the internet is of little help because the internet is a swamp polluted by fiction and falsehoods”).
[11] United States v. Cohen, 724 F. Supp. 3d 251, 259 (S.D.N.Y. 2024)(client “was entitled to rely on his counsel and to trust his counsel's professional judgment” regarding AI.); Park v. Kim, 91 F.4th 610, 614 (2d Cir. 2024)(“All counsel that appear before this Court are bound to exercise professional judgment and responsibility, and to comply with the Federal Rules of Civil Procedure. Among other obligations, Rule 11”).
[12] See e.g., Florida Bar Ethics Opinion 24-1 (2024); Alaska Bar Assn. Ethics Comm., Ethics Op. 2025-1 (2025); Pa. Bar Assn. Comm. on Legal Ethics & Pro. Resp., Informal Op. 2024-200 (2024); Cal. St. Bar Standing Comm. on Pro. Resp. & Conduct, Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (2023); In re Richburg, 671 B.R. 918, 919 (Bankr. D.S.C. 2025); United States v. Heppner, No. 25 CR. 503 (JSR), 2026 WL 436479, at *2 (S.D.N.Y. Feb. 17, 2026)(putting information into LLM waived privilege: “In the absence of an attorney-client relationship, the discussion of legal issues between two non-attorneys is not protected by attorney-client privilege.”).
[13] Sam Ghosh, Who Really Owns Your Data? Examining Data Ownership Rights in the Age of AI, Medium, October 3, 2025, https://samghosh.medium.com/who-really-owns-your-data-examining-data-ownership-rights-in-the-age-of-ai-171ba01b9e51, last visited March 19, 2026.
[14] Do LLMs 'store' personal data? This is asking the wrong question, International Association of Privacy Protections, October 23, 2024, https://iapp.org/news/a/do-llms-store-personal-data-this-is-asking-the-wrong-question, last visited March 19, 2026.
[15] United States v. Heppner, 1:25-cr-00503, (S.D.N.Y.); see also Pamela Langham, AI Platforms and Confidentiality: A Closer Look At United States v. Heppner, Maryland State Bar Association, February 25, 2026, https://www.msba.org/site/content/News-and-Publications/News/General-News/AI_Platforms_and_Confidentiality_A_Closer_Look_at_United_States_v_Heppner.aspx, last visited March 18, 2026; but see Warner v. Gilbarco Inc. and Vontier Corp., 2:24-cv-12333 (E.D. MI)(“AI programs are tools, not persons, even if they may have administrators somewhere in the background”).
[16] Celina Zhao, AI Hallucinates because it’s trained to fake answers it doesn’t know, Science, October 28, 2025, https://www.science.org/content/article/ai-hallucinates-because-it-s-trained-fake-answers-it-doesn-t-know, last visited March 19, 2026; see also Gary N. Smith, Large Language Models May Soon be Smarter than Humans, MindMatters May 9, 2025, https://mindmatters.ai/2025/05/yes-large-language-models-may-soon-be-smarter-than-humans/, last visited March 19, 2026.
[17] David Langham, Unseen Influence: Unconscious Predisposition in Dispute Resolution (2025)(“lists 151 predispositions”).
[18] Sally McLaren and Lily Rowe, “You’re right to be skeptical!”: The Role of Legal Information Professionals in Assessing Generative AI Outputs, Legal Information Management, Cambridge Core, Volume 25, Issue 1, June 2025; see also Adnan Masood, PhD., Counsel, Code, and Credibility: An Evidence-Based Narrative on AI’s Transformation of Legal Work, Medium, September 25, 2025, https://medium.com/@adnanmasood/counsel-code-and-credibility-an-evidence-based-narrative-on-ais-transformation-of-legal-work-9c56f46dd554, last visited February 17, 2026.
[19] Scott Boring, Solving the LLMs’ Knowledge Cutoff Issue with One Simple Message, Medium, June 9, 2025 (“they come with a knowledge cutoff—an artificial boundary beyond which they lack current information”); Shumailov, I., Shumaylov, Z., Zhao, Y. et al., AI models collapse when trained on recursively generated data, Nature 631, 755–759 (2024), https://doi.org/10.1038/s41586-024-07566-y, last visited March 19, 2026 (“’collapse’ occurs when AI is trained by AI”).
[20] See endnote 15.
[21] Pamela Langham, Gemini for Lawyers: Comparing Free, Pro, and Ultra AI Tools for Legal Practice, Maryland State Bar Association, September 16, 2025, https://www.msba.org/site/site/content/News-and-Publications/News/General-News/Gemini_for_Lawyers_Comparing_Free_Pro_and_Ultra_AI_Tools_for_Legal_Practice.aspx, last visited March 19, 2026.
[22] ABA Issues First Ethics Guidance on a Lawyer’s use of AI Tools, American Bar Association, July 29, 2024, https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/, last visited January 2, 2026; See also Cal. St. Bar Standing Comm. on Pro. Resp. & Conduct, Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (2023)(citing Rule 1.1 and 1.3 and noting “Before using generative AI, a lawyer should understand to a reasonable degree how the technology works, its limitations, and the applicable terms of use and other policies”).
[23] See ABA Model Rules of Professional Conduct, Rule 5.1 Responsibilities of Partners, Managers, and Supervisory Lawyers, https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_5_1_responsibilities_of_a_partner_or_supervisory_lawyer/ and Rule 5.3 Responsibilities Regarding Nonlawyer Assistance, https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_5_3_responsibilities_regarding_nonlawyer_assistant/, last visited March 19, 2026.
[24] William Josten, Is ABA Formal Opinion 512 off the mark? And if so, what can law firms and GCs do about it?, Thompson Reuters, April 2, 2025 (“if a lawyer using GenAI only spends 15 minutes to draft a pleading, that lawyer may only charge for that 15-minute period, plus whatever time the lawyer spends reviewing the draft”).
[25] Green Bldg. Initiative, Inc. v. Peacock, 350 F.R.D. 289, 291 (D. Or. 2025)(“fake case citations appear to be hallmarks of a generative artificial intelligence (‘AI’) tool, such as ChatGPT”; “general-purpose large language models hallucinate at least 75% of the time when answering questions about a court's rulings”).
[26] Samuel Edwards, Legal AI Audit Trails: Designing for Traceability, Law.co, https://law.co/blog/legal-ai-audit-trails-designing-for-traceability, last visited March 19, 2026 (“An AI audit trail is essentially a record of the decisions, inputs, outputs, and sometimes even the internal computations of an AI system. Properly set up, the audit trail will let you track from start to finish how an AI reached a particular recommendation or result.”)
[27] See Generative Artificial Intelligence (AI) Federal and State Court Rules Tracker, Practical Guidance, Lexis, https://advance.lexis.com/open/document/openwebdocview/Generative-Artificial-Intelligence-AI-Federal-and-State-Court-Rules-Tracker/?pdmfid=1000522&pddocfullpath=%2Fshared%2Fdocument%2Fanalytical-materials%2Furn%3AcontentItem%3A69RB-NV01-JJD0-G00F-00000-00&pdcomponentid=500749, last visited March 19, 2026.
[28] Cole Stryker, What is Human in the Loop?, IBM Think, IBM.com, https://www.ibm.com/think/topics/human-in-the-loop, last visited March 19, 2026 (“Human-in-the-loop (HITL) refers to a system or process in which a human actively participates in the operation, supervision or decision-making of an automated system… to ensure accuracy, safety, accountability or ethical decision-making”).
[29] Ganesh Padmanabhan, The Complacency Paradox: Trusting AI Without Losing Your Edge, Forbes, April 2025 (“knowledge workers … don’t intentionally become complacent, but as they trust AI more and more, they tend not to critically analyze any output.”); Parasuraman R, Manzey DH, Complacency and bias in human use of automation: an attentional integration, Human Factors. 2010;52(3):381–410. doi: 10.1177/0018720810376055; Goddard K, Roudsari A, Wyatt JC, Automation bias: a systematic review of frequency, effect mediators, and mitigators, Journal of the American Medical Informatics Association, 2012;19(1):121–127. doi: 10.1136/amiajnl-2011-000089 (“failing to appropriately correct their mistakes [automation bias and automation-induced complacency]”).