Can U.S. Lawyers Ethically Use AI Under ABA Rules? A Practical Guide by Use Case
Explore the ABA’s formal position on AI and how lawyers can use generative tools like ChatGPT ethically and effectively, with practical use cases and state bar insights.

Introduction
In 2023, a New York lawyer filed a court submission citing six completely fictitious cases. The culprit? ChatGPT. The case (Mata v. Avianca) made international headlines—and sent a chill through the legal industry. Was this the future of legal practice, or a cautionary tale of recklessness?
Today, the legal community is learning how to distinguish the two. AI is not a novelty or a gimmick. It’s a powerful tool already embedded in legal workflows. The question is no longer if lawyers will use AI—it’s how they can do so ethically and competently.
The American Bar Association (ABA) has now provided guidance through Formal Opinion 512 (July 2024), laying down a foundation for responsible AI use. And while it doesn’t introduce new rules, it makes one thing clear: ethical practice in the age of AI means supervision, transparency, and professional judgment remain non-negotiable.
This article breaks down the ABA’s position, previews the risks and expectations lawyers must navigate, and explores practical use cases to show how lawyers across the U.S. can begin integrating AI responsibly—today.
What the ABA Really Said: Guardrails, Not Greenlights
Formal Opinion 512 is the ABA’s first direct statement on generative AI. It acknowledges that AI tools can help lawyers “serve clients more efficiently and cost-effectively,” but urges caution: AI must be treated like any other non-lawyer assistant—useful, but subject to human oversight.
Here’s what the ABA emphasized, and what it actually means in practice:
- Competence (Rule 1.1): Lawyers must understand both the benefits and limitations of AI. That means staying informed about how your chosen tool works, what it can and cannot do, and how to detect when it's wrong.
- Confidentiality (Rule 1.6): AI tools must not expose client data. Public-facing LLMs (like ChatGPT or Google Gemini) often store inputs to retrain their models, so using them without safeguards may violate confidentiality duties.
- Supervision (Rule 5.3): AI output must be reviewed as thoroughly as you would review a junior associate's work. Lawyers are accountable for AI-generated materials submitted to clients, courts, or regulators.
- Communication (Rule 1.4): If AI use affects the cost, quality, or scope of legal services, you may need to disclose it to the client, especially if they're being billed for that work.
- Fees (Rule 1.5): The use of AI doesn't justify inflated billing. Lawyers must ensure fees remain reasonable and reflect the actual effort and value involved.
In short: AI is allowed—but not self-regulating. And that’s where the use cases come in.
How This Plays Out: Real-World Use Cases
To bring the ABA’s guidance into focus, let’s explore three scenarios already playing out in law firms—and the ethical questions they raise.
Use Case 1: Drafting Contracts with AI
Scenario: A lawyer uses an AI tool to generate first drafts of standard NDAs and commercial agreements.
What’s the opportunity? Faster turnaround, reduced human error, and cost-effective drafting for clients.
What the ABA expects:
- The lawyer must review the output for accuracy, legality, and suitability.
- Confidential client terms should not be entered into unsecured or public AI tools.
- Clients shouldn’t be charged as if this were traditional human drafting.
Verdict: Permissible with proper oversight. AI is a timesaver—not a judgment substitute.
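One way to operationalize the "no confidential terms in public tools" safeguard is a pre-submission screen that flags prompts containing client-identifying material before they ever reach a public model. A minimal sketch, assuming a firm-maintained blocklist; the terms and patterns below are hypothetical illustrations, not a complete safeguard:

```python
import re

# Hypothetical client-identifying terms a firm might maintain as a blocklist.
BLOCKED_TERMS = {"acme corp", "jane doe"}

# Rough patterns for data that should never leave the firm's systems.
PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN-like numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),    # email addresses
]

def screen_prompt(prompt: str) -> list[str]:
    """Return the reasons (if any) a prompt should not be sent to a public AI tool."""
    issues = []
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            issues.append(f"blocked term: {term}")
    for pattern in PATTERNS:
        if pattern.search(prompt):
            issues.append(f"matched pattern: {pattern.pattern}")
    return issues

print(screen_prompt("Draft an NDA between Acme Corp and a vendor"))
# → ['blocked term: acme corp']
```

A screen like this is a backstop, not a substitute for training: it catches obvious leaks, while the lawyer remains responsible for judging what is confidential in context.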
Use Case 2: Responding to Discovery with AI Tools
Scenario: A litigation team uses an AI platform to flag privileged or responsive documents.
What’s the opportunity? Efficient sorting of large data volumes with better tagging consistency.
What the ABA expects:
- Lawyers must understand how the tool flags relevance—not rely blindly.
- There must be safeguards for errors and human spot-checking.
- Vendors must have confidentiality protocols in place.
Verdict: Common and ethical if supervised carefully. The tool can assist, but the lawyer must control the final output.
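The "human spot-checking" expectation above can be made systematic rather than ad hoc: draw a random sample from the AI-tagged set and route it to attorney review. A minimal sketch; the 5% sample rate and document IDs are illustrative assumptions, not a defensible review protocol on their own:

```python
import random

def sample_for_review(tagged_docs, rate=0.05, seed=None):
    """Pick a random subset of AI-tagged documents for attorney spot-checking."""
    rng = random.Random(seed)  # a fixed seed makes the sample reproducible for audit
    k = max(1, round(len(tagged_docs) * rate))
    return rng.sample(tagged_docs, k)

# Hypothetical IDs of documents the AI platform flagged as privileged.
flagged = [f"DOC-{i:04d}" for i in range(1, 201)]
review_queue = sample_for_review(flagged, rate=0.05, seed=42)
print(len(review_queue))  # 10 of the 200 flagged documents go to human review
```

Recording the seed and the error rate found in each sample also creates the documentation trail that supervision under Rule 5.3 implies.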
Use Case 3: Using ChatGPT to Draft Research Memos
Scenario: An associate uses ChatGPT to generate a first draft of a legal memo, including citations.
What’s the opportunity? A useful brainstorming tool that reduces blank-page syndrome.
What the ABA expects:
- The lawyer must independently verify all case law and conclusions.
- No sensitive facts should be disclosed to public AI platforms.
- If the AI output meaningfully shapes the client deliverable, disclosure may be appropriate.
Verdict: Ethically risky if misused. Lawyers who submit AI-written memos without review risk malpractice—and reputation loss.
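The verification step above can be partly mechanized: extract every citation from the AI draft into a checklist, so each one is confirmed against a real reporter rather than skimmed past. A minimal sketch; the regex covers only a few common U.S. reporter formats and the memo text is hypothetical, so a real workflow would use a dedicated citation-parsing library:

```python
import re

# Simplified pattern for a handful of common reporters (U.S., S. Ct., F.2d/F.3d,
# F. Supp. 2d/3d). An illustrative assumption only; it will miss other formats.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.[23]d|F\. Supp\. [23]d)\s+\d{1,4}\b"
)

def extract_citations(draft: str) -> list[str]:
    """Pull candidate citations out of an AI-generated memo for manual verification."""
    return CITATION_RE.findall(draft)

memo = ("The court in Smith v. Jones, 123 F.3d 456, followed the reasoning of "
        "an earlier decision, 410 U.S. 113, in dismissing the claim.")
for cite in extract_citations(memo):
    print(cite)  # each printed citation must be confirmed to exist before filing
```

A checklist like this does not verify anything by itself; it simply guarantees that no citation in the draft escapes the lawyer's independent check.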
The State Bars Are Watching: Why This Is Just Beginning
While the ABA sets national ethics standards, state bars regulate discipline and practice directly. And they’re starting to pay closer attention.
- California requires lawyers to maintain “technology competence” as part of their CLE.
- Florida and North Carolina have issued internal memos warning firms to assess and document their AI use.
- Texas has flagged confidentiality and privilege as primary risks for AI tools used in litigation.
What does this mean? The ABA’s opinion is just a starting point. State bars are signaling that AI-specific guidance, rules, and CLE requirements are likely on the way—especially for client-facing applications, consumer law, and solo practitioners.
Firms that get ahead of these developments—by building internal policies, training lawyers, and choosing compliant tools—will be better positioned as the regulatory landscape evolves.
AI Isn’t Just the Next Tool. It’s the Shift.
Think of how much legal practice changed with digitization. Physical files gave way to cloud drives. Statutes and cases once locked in books became instantly searchable. That was transformational.
AI, however, isn’t just a continuation of that trend. It’s a paradigm shift.
AI can draft, summarize, and suggest. It produces structured, human-like output in seconds. When used responsibly, it doesn't replace legal thinking; it accelerates it.
- Drafts are no longer created from scratch.
- Case law and document searches can be run instantly.
- Risk assessments can be generated in minutes, not hours.
This isn’t about working less—it’s about working smarter.
Final Thoughts: The Ethical Responsibility Remains Yours
Whether you’re a solo practitioner using AI to save time or a global firm rolling out automated workflows, the message is the same: technology doesn’t replace ethics. Lawyers do.
The ABA’s position is measured but forward-looking. It gives lawyers the freedom to adopt AI—but only if they do so responsibly.
So take the time to understand the tools. Build internal protocols. Talk to your clients about how you work. And remember: AI is the enabler, not the shortcut.
Further Reading
- Can AI Meet Ethical Standards in Legal Work? A View from the Singapore Bar
- Coming Soon: How the UK’s Solicitors Regulation Authority Views AI in Legal Practice
This article is provided for informational purposes only and does not constitute legal advice. Businesses and individuals should consult with qualified legal counsel regarding their specific circumstances.