1. Competence: If You Use AI, You Must Actually Understand AI
Ethical competence used to mean “know the law.”
Then it meant “know the law and basic technology.”
Now it means “know the law, the technology, and how not to accidentally ask an AI model to leak your client’s secrets into the universe.”
Competence today requires lawyers to understand:
- What the AI tool does
- What it doesn’t do
- Whether it stores, reuses, or trains on your inputs
- When the system is hallucinating like it’s auditioning for a sci-fi movie
- How to validate every output before it touches a client or court (a quick validation sketch follows this list)
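To make that last point concrete, here is a minimal sketch of one validation step: checking every citation in a draft against a source you trust before it goes anywhere. Everything here is an illustrative placeholder, including the hard-coded citation set and the simplified regex; a real workflow would query Westlaw, Lexis, or the court's own records.

```python
import re

# Illustrative stand-in for a trusted citation source. In practice you would
# query Westlaw, Lexis, or the court's own records, not a hard-coded set.
VERIFIED_CITATIONS = {"410 U.S. 113", "347 U.S. 483"}

# Matches U.S. Reports citations like "410 U.S. 113" (a simplification;
# real citation formats are far messier).
CITE_PATTERN = re.compile(r"\b\d{1,4} U\.S\. \d{1,4}\b")

def flag_unverified(draft: str) -> list[str]:
    """Return every citation in the draft that could not be verified."""
    return [c for c in CITE_PATTERN.findall(draft) if c not in VERIFIED_CITATIONS]

print(flag_unverified("Compare 410 U.S. 113 with 999 U.S. 999."))
# ['999 U.S. 999'], the classic signature of a hallucinated case
```

If the tool cannot point you to a real, checkable source, the output does not leave your desk.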
Think of AI as a powerful but deeply unreliable intern.
Great at first drafts.
Terrible at nuance.
And will occasionally hand you a case from the “Federal Court of Never Happened.”
Competence means you must know enough to supervise it—not worship it.
2. Confidentiality: Don’t Feed Client Secrets Into a Digital Black Box
Lawyers are increasingly dropping client data into public AI tools with the confidence of someone tossing a coin into a fountain.
But here’s the ethical rule:
If you wouldn’t shout it across the lobby of opposing counsel’s office, don’t paste it into a public model.
Even many paid AI tools require careful setup (see the sketch after this list) to ensure:
- Data isn’t stored permanently
- Inputs aren’t used to retrain the model
- Outputs aren’t accessible to anyone else
- Logs or analytics don’t create accidental exposure
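What does "careful setup" look like in practice? Here is a minimal, vendor-agnostic sketch. The client object and its flags (retain_data, allow_training) are hypothetical; the real switches live in your tool's API documentation and, more importantly, in your data processing agreement.

```python
import re

# Crude examples of patterns worth redacting before a prompt leaves the firm.
# Real matters need far more than two regexes; this is only the shape of the idea.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like strings
    re.compile(r"(?i)\bacme corp\b"),      # a hypothetical client name
]

def scrub(prompt: str) -> str:
    """Redact obvious identifiers before the prompt leaves the firm."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

def send_to_model(prompt: str, vendor_client):
    # Hypothetical settings; confirm the real equivalents with your vendor.
    return vendor_client.complete(
        prompt=scrub(prompt),
        retain_data=False,     # assumption: the vendor offers zero retention
        allow_training=False,  # assumption: inputs can be excluded from retraining
    )

print(scrub("Acme Corp employee, SSN 123-45-6789"))
# [REDACTED] employee, SSN [REDACTED]
```

The point isn't this exact code. The point is that confidentiality is now a configuration problem as much as a discretion problem.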
In the old days, confidentiality breaches involved someone leaving a briefcase in a taxi.
Now the breach is a lawyer who types:
“Draft a motion using the following facts: [INSERT EVERYTHING WE MUST PROTECT].”
Confidentiality didn’t change.
The risks just leveled up.
3. Supervision: AI Must Never Be the Final Reviewer
Every ethics rule about supervising human staff applies to AI—with one twist: AI doesn’t improve with experience unless it’s being retrained, and you’re definitely not retraining it with your client’s data.
That means:
- Lawyers must review AI output line by line
- No AI-generated research should go to a client unverified
- No AI-drafted motion should go to a court without source-checking
- No automated workflow should be treated as a “set and forget” system (one enforcement pattern is sketched below)
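One way to make "the lawyer is the final reviewer" more than a slogan is to build it into the workflow itself. A minimal sketch, with hypothetical names throughout:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """A hypothetical record that ties every AI draft to a human sign-off."""
    text: str
    sources_checked: bool = False
    reviewed_by: Optional[str] = None  # the lawyer who actually read it

def release(draft: Draft) -> str:
    # The gate: nothing leaves the firm without a named reviewer and a
    # completed source check. Fail loudly, never silently.
    if draft.reviewed_by is None:
        raise PermissionError("No lawyer has signed off on this draft.")
    if not draft.sources_checked:
        raise PermissionError("Citations have not been checked against primary sources.")
    return draft.text
```

Calling release() on an unreviewed draft raises an error instead of quietly filing it. That is the whole design choice: the system should make skipping review harder than doing it.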
If you give AI an inch, it will confidently write a 20-page argument for why the Constitution was actually ratified in 1978.
AI is not a substitute for judgment.
It’s a multiplier of your judgment—or your mistakes.
4. Transparency: When Do You Need to Tell the Client or Court?
Good news: ethics rules don’t demand a neon sign saying “AI HELPED WITH THIS.”
Bad news: sometimes disclosure is required.
Moments where transparency may matter:
- When AI meaningfully impacts fees (clients must understand what they’re paying for)
- When the court has specific AI disclosure rules (an increasing trend)
- When AI is used to take over tasks that would otherwise require human expertise
- When automation could influence legal advice, not just formatting or grammar
The test isn’t “Did I use AI?”
The test is “Would a reasonable client expect to know this?”
If the answer is maybe, the answer is yes.
5. Bias, Fairness, and the “Garbage In, Garbage Out” Principle
AI doesn’t wake up deciding to discriminate.
It learns from data.
And legal data reflects the real world, with all its messy inequalities.
This means lawyers must consider:
- Whether AI tools have been tested for bias
- Whether outputs skew in predictable or harmful ways
- Whether using automated decision-making in intake or triage creates unfair barriers (a crude screen for this is sketched after the list)
- Whether your workflows let the AI act on factors it should not be weighing at all
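Here is one crude screen for the intake scenario, borrowed from employment law's "four-fifths rule": compare favorable-outcome rates across groups and flag large gaps. The numbers and threshold below are illustrative only; a real audit needs statisticians, not one ratio.

```python
def selection_rate(outcomes: list[bool]) -> float:
    """Fraction of cases with the favorable outcome (e.g., accepted at intake)."""
    return sum(outcomes) / len(outcomes)

def impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower selection rate to the higher. Values under ~0.8 are
    a common screening threshold (the "four-fifths rule")."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Illustrative data only: did the triage tool accept both groups at similar rates?
accepted_a = [True, True, False, True, True]    # 80% accepted
accepted_b = [True, False, False, True, False]  # 40% accepted
print(impact_ratio(accepted_a, accepted_b))     # 0.5 -> worth investigating
```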
Bias isn’t a new ethics risk.
It’s the same old risk wearing a hoodie and running in the cloud.
6. The Real Ethical Rule: AI Is Not the Problem—Blind Trust Is
AI will continue to get better.
Faster.
More accurate.
More integrated into legal practice.
But no matter how advanced it becomes, the ethical responsibility doesn’t move.
Lawyers remain accountable for:
- What information they expose
- How they validate results
- What advice they give
- How they supervise tools
- How they explain their work to clients
AI expands capability, not accountability.
If anything, it increases the responsibility to stay vigilant.
Think of it this way:
AI gives lawyers superpowers.
Ethical rules exist to make sure no one uses those powers to accidentally fly straight into a glass wall.
The Takeaway (and a Preview of Part 3)
AI is not replacing lawyers.
It’s replacing the parts of lawyering that don’t need a law license—drafting snippets, summarizing text, organizing information, proposing ideas.
But every use of AI is an ethical decision wrapped inside a workflow decision.
And next up, in Part 3, we look at the part of the ecosystem lawyers often forget:
your vendors—the cloud tools, data processors, and AI providers who become part of your ethical responsibilities the moment you hit “Upload.”
Because in the modern law firm, outsourcing doesn’t outsource risk.
