
Penalties Stack Up As AI Spreads Through the Legal System

Tony Isaac shares a report from NPR: When it comes to using AI, it seems some lawyers just can't help themselves. Last year saw a rapid increase in court sanctions against attorneys for filing briefs containing errors generated by artificial intelligence tools. The most prominent case was that of the lawyers for MyPillow CEO Mike Lindell, who were fined $3,000 each for filing briefs containing fictitious, AI-generated citations. But as a cautionary tale, it doesn't seem to have had much effect. The numbers started taking off last year, and the rate is still increasing. One researcher tracking these sanctions counts more than 1,200 to date, of which about 800 are from U.S. courts. "I am surprised that people are still doing this when it's been in the news," says Carla Wale, associate dean of information & technology and director of the law library at the University of Washington School of Law. "Whatever the generative AI tool gives you -- as in, 'Look at these cases' -- you, under the rules of professional conduct, you have to read those cases. You have to read the cases to make sure what you are citing is accurate."

"I think that lawyers who understand how to effectively and ethically use generative AI replace lawyers who don't," she says. "That's what I think the future is."
  • I suggest:

    First offence: Have to watch C-SPAN for 5 hours a day, for a week, without sleeping through it - evidence to be provided in court

    Second offence: Have to sing Miley Cyrus songs and Baby Shark on TikTok - sober

    Third offence: License to practice and all memberships of country clubs and golf courses revoked

    • Re: (Score:3, Funny)

      by fahrbot-bot ( 874524 )
      From TFS:

      "Whatever the generative AI tool gives you ... you have to read those cases.
      You have to read the cases to make sure what you are citing is accurate."

      Since the issue seems to be attorneys not reviewing/checking their sources, I suggest the punishment be along those lines. Perhaps a week of extensively fact-checking *everything* the President says, for each offense.

    • by drnb ( 2434720 ) on Friday April 03, 2026 @03:13PM (#66075894)

      We need to increase the penalties

      I think we first need to be sure that the lawyers actually paid the penalty themselves, and did not pass it on to the client as some sort of expense. And if it is not passed on to the client, ensure that the penalty is not a business expense they can deduct.

      • Predict: There will be venture capital firms using some whistleblower type law to file legal malpractice complaints on an industry wide scale in a year or two.

        1. Get all the court filings in a state or federal court for the last 2 years
        2. Scan via AI for all legal precedent citations
        3. Find legal citations which do not exist
        4. File complaints with the state bar (lawyer licensing agency), the court itself, state/federal judicial misconduct agencies
        5. Find the opposing legal parties and class action sue the l
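
        Steps 1-3 of the scan described above could be sketched in a few lines. A minimal sketch in Python, where the citation regex, the plain-text filings directory, and the local index of known citations are all hypothetical stand-ins (a real scan would query a case-law database such as CourtListener rather than a hard-coded set):

```python
import re
from pathlib import Path

# Rough pattern for a U.S. reporter citation (volume, reporter, page).
# Real citation formats are far more varied; this is illustrative only.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.[23]d|F\. Supp\. 2d|S\. Ct\.)\s+\d{1,4}\b"
)

# Hypothetical local index of citations known to exist; a real scan
# would query an external case-law database instead.
KNOWN_CITATIONS = {"410 U.S. 113", "347 U.S. 483"}

def scan_filings(directory: str) -> dict[str, list[str]]:
    """Map each filing to any citations it contains that the index
    cannot confirm -- candidates for a complaint, per steps 3 and 4."""
    report: dict[str, list[str]] = {}
    for path in Path(directory).glob("*.txt"):
        suspects = [c for c in CITATION_RE.findall(path.read_text())
                    if c not in KNOWN_CITATIONS]
        if suspects:
            report[path.name] = suspects
    return report
```

        The pattern match is the easy part; the real work is the existence lookup, since reporter citations come in far more formats than any one regex covers.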

  • by gurps_npc ( 621217 ) on Friday April 03, 2026 @01:13PM (#66075728) Homepage

    Because that would explain why they still have issues.

    I can easily see a lawyer ordering their paralegals to fact-check their reports, but when they get AI they fire the paralegals and assume the AI can handle those duties.

    • Because that would explain why they still have issues. I can easily see a lawyer ordering their paralegals to fact-check their reports, but when they get AI they fire the paralegals and assume the AI can handle those duties.

      1) Why do you assume paralegals had anything to do with it? If a lawyer is using AI to write their briefs without checking anything before submitting them to courts, it is most likely they do not have paralegals to do that or they were never going to give that work to a paralegal.

      2) No matter how the work is produced, the lawyer who submits the filing under their own name is responsible for making sure the filing is correct. In one of the AI cases I remember, the lawyer who submitted the filing was sancti

      • 1) If your lawyer does not have a paralegal, you should fire them. Law is a very paperwork-intensive industry. Nowadays you can get away with one paralegal serving multiple lawyers, but nobody, and I mean nobody, goes without a paralegal. Smaller shops may have only one person doing the work of a receptionist and paralegal and other things, but they will always have at least one person.

        It is inappropriate for a lawyer to have the paralegal check his work, but if you are not going to check it at all, that

        • If your lawyer does not have a paralegal, you should fire them.

          1) So your advice is to only hire lawyers from large law firms? Lawyers with a solo practice may not have a paralegal. 2) A

          Law is a very paperwork-intensive industry. Nowadays you can get away with one paralegal serving multiple lawyers, but nobody, and I mean nobody, goes without a paralegal.

          Again, small one-lawyer firms exist, so what you are saying is untrue. They should use paralegals to lighten their workload, but that does not mean they must always use a paralegal.

          Smaller shops may have only one person doing the work of a receptionist and paralegal and other things, but they will always have at least one person.

          Again, I know of one-person firms that do not always use a paralegal.

          It is inappropriate for a lawyer to have the paralegal check his work, but if you are not going to check it at all, that is incredibly stupid.

          You wrote above: "I can easily see a lawyer order their paralegals to fact check their reports" You just contradicted what you wrote.

          Stupid as in how the hell did you get into law school and how did they not kick you out.

          You lite

    • But what I think is really going on is that dodgy attorneys have been putting fake citations in their briefs for centuries, and we are hearing about it now because they're using AI to generate the fake citations instead of just making them up on the spot like they used to. I suspect judges and defense attorneys are scrutinizing citations more too because they've seen the stories about AI.

      But I do think it would be naive to believe that given how skeezy attorneys can be that they haven't been feeling their port
      • by tlhIngan ( 30335 )

        But what I think is really going on is that dodgy attorneys have been putting fake citations in their briefs for centuries, and we are hearing about it now because they're using AI to generate the fake citations instead of just making them up on the spot like they used to. I suspect judges and defense attorneys are scrutinizing citations more too because they've seen the stories about AI.

        But I do think it would be naive to believe that given how skeezy attorneys can be that they haven't been feeling their portfili

  • But, but, I am a lawyer, how can you doubt me!
  • Lawyers are some of the most overworked people on the planet. Not only that, but the work they do requires a lot of high-level thinking and processing for long stretches of time. It's exhausting work.

    So along comes AI, which can turn hours of work into minutes, saving them a lot of time and effort (at least up front). Of course they'll take a chance on it, especially when it lets them get eight hours of sleep a few more times a week. Besides, with better odds than a coin flip, the case will probably settle anyway, and what they write will never see the light of day.

    Besides, it's very easy to skim through what AI generates and feel convinced that it's good enough. Only if one were to really scrutinize the work would one discover how terrible it is, but why bother doing all that extra evaluation...wasn't AI supposed to save you time?

    • by UnknowingFool ( 672806 ) on Friday April 03, 2026 @02:18PM (#66075826)

      Only if one were to really scrutinize the work would one discover how terrible it is, but why bother doing all that extra evaluation...wasn't AI supposed to save you time?

      Unfortunately for lawyers, they are in a field where their opposition checks their work. From what I understand, the most difficult part is finding the relevant case or law. If a lawyer cites [court case] or [law], the opposition can quickly check it. It is almost like an NP problem: finding a relevant citation is hard, but verifying one is easy.

    • by MobyDisk ( 75490 ) on Friday April 03, 2026 @02:20PM (#66075834) Homepage

      which can turn hours of work into minutes, saving them a lot of time and work

      1. Raw work: 8 hours
      2. Work with unchecked AI: 8 minutes
      3. Work with checked AI: 16 minutes

      I don't get why people choose option 2 over option 3.

      Lawyers are some of the most overworked people on the planet.

      The stocker making $15/hour needs to work extra hours to survive. Why does the lawyer making $500/hour overwork?

      • Isn't that the problem? $200 an hour? Do due diligence, charge for 10 hours of work: $2,000. Do 10 cases with AI in 10 hours: $20,000. A few fake references slip through during review. Some get noticed; fine for case 8: -$3,000. $17,000 earned.
        Go for the red pill; the blue one is for pansies, some would state.
        • That's an extremely short-term analysis.

          Your red-pilled lawyer will soon have a reputation. They'll find their client pool shrinking, judges not giving them the benefit of the doubt, and other lawyers not referring work to them.

          They'd better do a lot of slop cases quickly and hope the money lasts, because that strategy is going to tank their practice faster than they graduated law school.

        • by azander ( 786903 )

          That $200/hour? That is the price they charge.

          Out of that comes about 10% needed to cover continuing education for themselves and any staff, as required by the bar and in some places by law. Another 20% is taxes in one form or another. Another 10-15% covers office upkeep, excluding wages for staff. And finally, between 40 and 50% is staff wages and benefits. That leaves a very small amount for the lawyer themselves. The smaller the office, the higher the percentages taken from the overall income. This is

          • by MobyDisk ( 75490 )

            This is a valid retort. But let us not think that lawyers are struggling: once they get to be a "partner" in a firm they are likely making $1 million/year. And the entire context of the discussion is that they aren't relying on staff like they used to. Back in 1980, a lawyer had staff members who ran down to the court house to get documents, bring them back, photocopy them, staple them, file them, make phone calls. Now all of that is 100% automated, plus now they have AI.

            I'm not sure the legal overhead

    • by alvinrod ( 889928 ) on Friday April 03, 2026 @02:51PM (#66075870)
      Anyone charging hundreds of dollars per hour for their work had better damned well be doing it. Not only should they be sanctioned by the courts, but they should face criminal fraud charges. If the courts want to put a stop to this they had better get serious now and stop handing out slaps on the wrist. Multiply the sanctions by an order of magnitude and give opposing counsel a 30% finder's fee to encourage additional vigilance and it'll quickly stop.
    • Not really.

      The vast majority of attorneys get paid an hourly rate. Efficiency is not something they are interested in because it does not change their income, and most of their work is not so time-sensitive as to allow for rate increases if they can work faster. Why would anyone choose to do more work (in the same number of hours) for the same pay?

      Reducing the firm's overhead is where they can make more money -- fewer associates, paralegals and admins -- but if AI can't do the work better than the humans then it is

  • by gweihir ( 88907 ) on Friday April 03, 2026 @01:25PM (#66075748)

    Essentially what you expect from the stereotypical US lawyer. Yes, I am aware not all are like that, but it seems a significant part is exactly like this.

    As to sanctions, I would think 3 strikes and then they cease to be lawyers, permanently. And if it goes wrong, full personal liability, not covered by insurance. With the fees these people ask, what they are doing is essentially fraud.

    • Essentially what you expect from the stereotypical US lawyer.

      He says, about an article which clearly notes a significant number of these cases happen outside the USA.

      • by gweihir ( 88907 )

        So, 800/1200 in the US, while 250M/8B of the population.

        I think you need to look up what "significant" means...

    • Especially, stupidity.

      Checking sources is something that AI is especially good at: finding a specific referenced case, and summarizing it to see if it relates to the citation. If lawyers aren't doing that basic kind of check, my question is, why the heck not?

      • by gweihir ( 88907 )

        Indeed. "Better search" (I include "summarization" in that) is basically the only thing LLMs are really good at (minus hallucinations, hence the need to check).

  • Ethics (Score:5, Interesting)

    by eriks ( 31863 ) on Friday April 03, 2026 @01:51PM (#66075784)

    "I think that lawyers who understand how to effectively and ethically use generative AI replace lawyers who don't,"

    There are three kinds of people in the world:

    1. Those who strive to behave ethically.
    2. Those who don't give a damn about ethics at all and make no bones about it.
    3. Those who pretend to behave ethically.

    People who want to "do the right thing" aren't a problem. They sometimes make mistakes, but try to correct them. I think this is most people, like more than 80%.

    People who don't give a damn aren't really a problem either, since in a world populated by mostly good people, they'll ultimately be shamed and marginalized or end up in jail.

    People who can successfully project the illusion of behaving ethically when they have no intention of doing so are a HUGE problem. While there aren't a lot of them, they're highly concentrated in positions of power and hold most of the world's wealth.

    Maybe in the field of law, you can sort of cancel out the pretenders over time, since everything is (ostensibly) reviewed, so maybe "AI" will help the unabashedly unethical lawyers to self-destruct, but everywhere else, the problem remains, and "AI" is mostly going to make them worse.

    • 4. Those who are unsure on the concept [youtu.be] and could use some guidance
      • by eriks ( 31863 )

        Yeah, that's definitely a thing, though for the most part, that's willful ignorance, since the foundation of ethics (an innate sense of fairness) is rooted in biology [citations widely available]. And "do unto others" isn't quantum physics.

        Though you're right, there is a fourth group that legitimately "doesn't understand what ethics even is," though I think that group is vanishingly small, and most people with an IQ above 50 who couldn't tell you what "ethics" is still know what the "golden rule" is.

        "Fake Kevi

    • by kbahey ( 102895 )

      2. Those who don't give a damn about ethics at all and make no bones about it.
      ...
      People who don't give a damn aren't really a problem either, since in a world populated by mostly good people, they'll ultimately be shamed and marginalized or end up in jail.

      Or, they get elected president, twice ... destroy the economy, country's reputation, and start a war they can't end.

      • by eriks ( 31863 )

        True enough, he's definitely in group #2. No question about that. Though he and people like him are kind of a special (pathological) case, straddling the line between #2 and #3, in that they seem to believe their own current bullshit, even when it directly contradicts their previous bullshit. Not that they actually care about ethical behavior at all, but I think at least some of the time, they're deluding themselves into thinking that they're "doing the right thing" -- and they (sort of) are, but only fo

  • by SmaryJerry ( 2759091 ) on Friday April 03, 2026 @02:01PM (#66075794)
    Literally everything I've tried to use AI for has had mistakes, even simple requests. Occasionally it can really amaze you with something, but in general its best quality is as a Google replacement, or allowing someone who doesn't program to write something very simple that doesn't matter if it breaks.
    • by EvilSS ( 557649 )
      Funny enough, this is one place where the fix is easy but probably not cheap. They just need to build guardrails that automatically check any case law references against something like LexisNexis and feed back to the AI if it makes something up. Case law is extremely well documented and fairly structured in how it's indexed. You wouldn't even need to use AI for the lookup; a competent traditional search algo would work. Of course that's going to be expensive since it will require access to the case law da
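
      The guardrail described above is essentially a verify-and-retry loop around the model. A minimal sketch in Python, where `generate` is a hypothetical stand-in for the model call and `citation_exists` a stand-in for the database lookup (a real check would query LexisNexis, Westlaw, or a free index like CourtListener, not a hard-coded set):

```python
import re
from typing import Callable

# Rough pattern for a U.S. reporter citation; real formats vary widely.
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.[23]d|S\. Ct\.)\s+\d{1,4}\b")

def citation_exists(cite: str) -> bool:
    """Hypothetical stand-in for a real case-law database lookup."""
    return cite in {"410 U.S. 113", "347 U.S. 483"}

def draft_with_guardrail(generate: Callable[[str], str], prompt: str,
                         max_retries: int = 3) -> str:
    """Ask the model for a draft; if it cites anything the database
    cannot find, feed the bad citations back and ask again."""
    bad: list[str] = []
    for _ in range(max_retries):
        draft = generate(prompt)
        bad = [c for c in CITATION_RE.findall(draft)
               if not citation_exists(c)]
        if not bad:
            return draft  # every citation checked out
        prompt += ("\nThe following citations could not be verified; "
                   "remove or replace them: " + "; ".join(bad))
    raise ValueError("unverifiable citations remain: " + "; ".join(bad))
```

      The design point is that the verifier is deterministic: the model only drafts, and the database decides what counts as a real case, so a hallucinated citation can never pass the gate.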
      • If there were a database of all case law, this would seem like a perfect use for AI. You tell the AI: hey, find me some case law that supports my current problem. It shoots back a handful of relevant cases, you read them and away you go.

        I hate to admit, but I do like AI for search results and also a link or two at the bottom to show me where they sourced their information. While it doesn't always work, a good portion of the time, it does.

    • I don't know what AI system(s) you are using, or how long it has been since you tried, but Claude is extremely impressive and while it isn't perfect it is far more effective than google searching. I recently used it to troubleshoot a guitar problem I had with dead notes when fretting on the first fret and it was quite competent. I was able to interact with it and gain a solid understanding of what was causing the issue, including the physics behind it. Then it helped me find a great repair shop to replace
    • I see this a lot on /. and I think it demonstrates a lack of understanding of AI. We have achieved a massive breakthrough in natural language processing -- yet many think this immediately applies to the very structured and deterministic language of code. I have used AI for years and am (was) a programmer. In those years I've literally asked AI once for some code, on how to analyze some CSV data for trends, and sure, what it produced was halfway decent.

      NLP (English or otherwise) is fuzzy and not t
  • For allegedly smart people like lawyers, I'm surprised they can't figure out to add to every single prompt "but make sure you're referencing one single case that actually happened and add your cited source next to it so I can verify it." Not that complicated!
    • but make sure you're referencing one single case that actually happened and add your cited source next to it so I can verify it.

      1) I doubt AI is smart enough to do that. 2) Verifying the case exists is trivially easy as the citation explicitly tells everyone how to find the case. The difficulty is in locating cases that are relevant to the legal issue.

    • by EvilSS ( 557649 )
      You joke but following up with "Are you sure?" will trigger a lot of models to go verify their output with external sources like web searches.
  • They shouldn't have let Shakespeare's 'first kill the lawyers' be in their reading material ;)
