Competence, two years in: the quiet radicalism of Rule 1.1
What does it mean to be competent at a tool you cannot audit? A careful walk through the part of ABA Formal Opinion 512 most of the commentary has missed.
The document on the desk of the Sixth Circuit panel on the morning of March 13, 2026, was an eleven-page appellate brief filed by two Tennessee attorneys. Twenty-one of the case citations in it did not exist. Not miscited. Not out of date. Did not exist. Fabricated, with docket numbers and pinpoint page references and plausible judicial voice and the quiet authority of footnotes.
The panel’s reaction was a sentence the legal press picked up immediately — “a fake opinion is not existing law” — and a joint-and-several sanction that, by the time the appellees’ fees and double costs were added, landed at $116,315.09. The lawyers had signed the brief. Under Rule 11, that was enough.
You could read Whiting v. City of Athens as a story about sanctions, which is how most of the coverage framed it — and which is Letter V. You could read it as a story about candor toward the tribunal, which is Rule 3.3, same letter. I read it as a story about Rule 1.1, and the part of ABA Formal Opinion 512 that almost every piece of commentary has skipped past on the way to the louder rules.
I’m Judy. Let me walk you through it.
The ground has shifted at least nine times since 512 dropped. The opinion itself hasn’t.
§ 1 What 512 asked of Rule 1.1
The opinion runs fifteen pages and names six Model Rules. It opens on Rule 1.1 — competence — because that was the easiest of the six to get wrong. Rule 1.1 was already amended, back in 2012, to include the famous Comment 8 language about the benefits and risks of relevant technology. What 512 did was take that comment, read it against a class of software the drafters had never contemplated, and decline to issue a single carve-out.
Specifically: the committee said a lawyer using generative AI must have a reasonable understanding of the tool’s capabilities and limitations. Not an engineering understanding. Not a white-paper understanding. Reasonable. Enough to know what the tool does, what it cannot do, and what particular risks attend its use in the matter in front of you.
The first time I read that sentence, it looked generous. The second time I read it — with the hallucination-benchmark literature open on the other side of the screen — it looked less so.
Reasonable understanding of a tool you cannot audit is not the same kind of competence as reasonable understanding of the Rules of Evidence.
§ 2 The audit problem
Stanford’s RegLab published a paper in May 2024 — Magesh, Surani, and Ho — benchmarking the leading legal-AI research tools against a set of controlled queries. The headline number is the one the trade press keeps repeating: 17 to 33 percent of tested outputs contained material hallucinations, even on the retrieval-augmented products that explicitly advertise themselves as hallucination-free.
The paper is careful. The hallucination rate is not uniform across tools, matter types, or jurisdictions. A well-configured research tool with a strong retrieval layer does better than the consumer chat product your nephew showed you last Thanksgiving. But the interesting number is not the worst case. The interesting number is the best case: roughly one in six, on a product that charges what a Westlaw seat costs.
I want to sit with that for a moment, because it is where the Rule 1.1 question actually lives. The human version of this error is a paralegal who sometimes fabricates case citations but is otherwise excellent at Bluebook format. You would notice that paralegal by the second week. You would have her fired by the third. You would not keep her on the billable payroll at any rate of fabrication, not because her citations are wrong but because you cannot build a supervisory relationship with someone who is sometimes not telling you the truth.
A tool that fabricates between 17 and 33 percent of the time, depending on the matter, is in that category. And the first thing Rule 1.1 asks of you — once you have decided to keep it on the payroll — is that you know this. Not read a LinkedIn post about it. Know this, in the way a careful lawyer knows a rule.
§ 3 The solo version of the problem
At a firm of forty, competence is a supervisory structure. A first-year runs the draft, a third-year reviews the citations, a partner reads for judgment, and the brief goes out under three pairs of eyes. The Rule 1.1 obligation sits across that chain, and any one of those readers catches the fabricated citation.
At a firm of one, the chain collapses. You are the first-year, the third-year, and the partner. You are also the paralegal, the billing clerk, and the person who remembers where the client parked on Tuesday. The ordinary solo day does not include a second reader. It does not include a fourth coffee. It includes the matter in front of you and the one you have not opened yet.
Rule 1.1, as 512 reads it, asks one person to hold the entire supervisory chain against the tool. That is a real obligation. It is also not, on a Tuesday afternoon, something a careful lawyer meets by good intentions. The two attorneys in Whiting were not stupid, and I do not think they were cavalier. I think they were busy, and the tool returned a confident-sounding draft, and they signed it, because the supervisory chain that would have caught the error had already been outsourced to the tool. There was nobody else in the office.
The first thing the competence duty asks of a solo, in a post-AI practice, is to rebuild the second pair of eyes.
§ 4 What reasonable understanding looks like
Reasonable understanding, for the solo, is four questions you can answer about any tool you are about to put a client matter into. I am going to list them without much commentary, because commentary dulls the edge.
- Does it retain your inputs? The answer is either yes, no, or yes but for a narrow window — each of which means something different for Letter II.
- Does it use your inputs to train the model? This is the self-learning question. It has a different answer for consumer ChatGPT, enterprise ChatGPT, Claude via API, and the same model behind a vendor promising zero data retention. The right answer for a client matter is no.
- What is its documented error rate? If the vendor cannot tell you or will not tell you, that is the answer. Adjust accordingly.
- What is your verification step? Every output is either verified before it reaches the client or it is an output you have decided not to verify. Neither is wrong. But the choice is yours and has to be made per matter.
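If it helps to see those four questions as something you could actually write down per matter, here is a minimal sketch in Python. Everything in it is hypothetical: the ToolAssessment name, the field names, and the open_issues check are illustrative choices of mine, not anything Opinion 512, a bar regulator, or a vendor prescribes. The point is only that each answer gets recorded, and that a missing answer is itself recorded as a problem.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ToolAssessment:
    """One record per tool, per matter: the four questions above, written down.

    Hypothetical structure for a solo's own matter file. Nothing here is
    prescribed by Opinion 512 or by any vendor; the names are illustrative.
    """
    matter_id: str
    tool_name: str
    retains_inputs: str                    # "yes", "no", or "yes, but only for a narrow window"
    trains_on_inputs: bool                 # the self-learning question; should be False for client work
    documented_error_rate: Optional[str]   # None means the vendor could not or would not say
    verification_step: Optional[str]       # None means you have decided not to verify; say so

    def open_issues(self) -> list:
        """List whatever still stands between this configuration and a client matter."""
        issues = []
        if self.trains_on_inputs:
            issues.append("tool trains on your inputs; the right answer for a client matter is no")
        if self.documented_error_rate is None:
            issues.append("no documented error rate; treat every output as unverified by default")
        if self.verification_step is None:
            issues.append("no verification decision recorded for this matter")
        return issues


# The choice is made per matter, and it is written down where a second
# pair of eyes (even your own, a week later) can find it.
assessment = ToolAssessment(
    matter_id="2026-014",
    tool_name="legal research tool, enterprise tier",
    retains_inputs="yes, but only for a narrow window",
    trains_on_inputs=False,
    documented_error_rate=None,
    verification_step="verify every citation and quotation before anything is signed",
)
for issue in assessment.open_issues():
    print("Open issue:", issue)
```

A paper checklist in the matter file does the same work; the form matters far less than the habit of answering all four questions before the tool touches the matter.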
Those answers feed straight into the consent question, which sorts into four rows:

- No 512 consent needed. Using an AI for generic legal research, drafting a blog post, or summarizing public case law triggers no consent obligation under 512. Document the configuration anyway.
- Informed consent required. Conversation, not signature. Name the tool, the data, the risk, the alternative, and your best judgment. Record the conversation in the matter file.
- Follow the state rule. California’s practical guidance is the current high-water mark: it limits input of confidential client information into self-learning tools regardless of consent. The configuration is the control, not the conversation.
- No 512 consent obligation, but document the tool, the configuration, and the vendor’s data-handling commitment. You are still supervising under Rule 5.3.
Print this. Put it on the wall. Every row on the wall is a sanction you didn’t get.
§ 5 The client-facing sentence
Competence is also something the client has to be able to feel. You do not have to read them the Stanford paper. You do have to say a version of this, in a voice that sounds like yours, in the first or second meeting:
“I use an AI tool the way I used to use a junior associate. It produces a first draft. I review and edit before anything leaves my office. I want you to know three things about that. First, I use a version of the tool that keeps your information on my machine. Second, I verify every citation and every factual claim in anything I sign. Third, I don’t bill you for time the tool saved me.”
That is Rule 1.1 in a sentence, said aloud, in front of the person whose matter is the reason any of this matters. You’ll hear more about that sentence in Letter II, when we get to consent, and in Letter III, when we get to the billing line at the end. Today the point is smaller: a lawyer who can say that sentence, and mean it, has the competence 512 asks for. A lawyer who cannot does not.
§ 6 What the page actually asks
Two years after 512 dropped, most of the writing on it has reached for the loudest rules — the confidentiality panic, the billing debate, the sanctions data. Those are all real. They are not where the opinion starts.
The opinion starts on Rule 1.1 because the committee understood something the commentary has not quite caught up with. A lawyer who has rebuilt the second pair of eyes — who knows what the tool does, who verifies what the tool produces, who has worked out what to say to the client about the tool before the tool enters the matter — has, by that fact alone, solved a large fraction of the other five rules. A lawyer who has not done that work is going to find the other five rules ambushing her in sequence.
That is the quiet radicalism of the first page of the opinion. The rest of the page, and the rest of the letters, follow from it.
I’ll be back Tuesday.
— Judy
Primary sources
- ABA Formal Opinion 512 (July 29, 2024) — the full fifteen pages. Competence is the opinion’s first section.
- Magesh, Surani & Ho — Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools (Stanford HAI / RegLab, May 2024) — the 17–33 percent benchmark.
- Whiting v. City of Athens, Tennessee, 6th Cir. (Mar. 13, 2026) — the “fake opinion is not existing law” framing.
- Damien Charlotin — AI Hallucination Cases Database — running empirical record, 1,330 cases as of April 2026.