Judges Find AI Doesn't Have Human Intelligence in Two New Court Cases (yahoo.com) 79
Within the last month, two U.S. judges have effectively declared AI bots are not human, writes Los Angeles Times columnist Michael Hiltzik:
On Monday, the Supreme Court declined to take up a lawsuit in which artist and computer scientist Stephen Thaler tried to copyright an artwork that he acknowledged had been created by an AI bot of his own invention. That left in place a ruling last year by the District of Columbia Court of Appeals, which held that art created by non-humans can't be copyrighted... [Judge Patricia A. Millett] cited longstanding regulations of the Copyright Office requiring that "for a work to be copyrightable, it must owe its origin to a human being"... She rejected Thaler's argument, as had the federal trial judge who first heard the case, that the Copyright Office's insistence that the author of a work must be human was unconstitutional. The Supreme Court evidently agreed...
[Another AI-related case] involved one Bradley Heppner, who was indicted by a federal grand jury for allegedly looting $150 million from a financial services company he chaired. Heppner pleaded innocent and was released on $25-million bail. The case is pending.... Knowing that an indictment was in the offing, Heppner had consulted Claude for help on a defense strategy. His lawyers asserted that those exchanges, which were set forth in written memos, were tantamount to consultations with Heppner's lawyers; therefore, his lawyers said, they were confidential according to attorney-client privilege and couldn't be used against Heppner in court. (They also cited the related attorney work product doctrine, which grants confidentiality to lawyers' notes and other similar material.) That was a nontrivial point. Heppner had given Claude information he had learned from his lawyers, and shared Claude's responses with his lawyers.
[Federal Judge Jed S.] Rakoff made short work of this argument. First, he ruled, the AI documents weren't communications between Heppner and his attorneys, since Claude isn't an attorney... Second, he wrote, the exchanges between Heppner and Claude weren't confidential. In its terms of use, Anthropic claims the right to collect both a user's queries and Claude's responses, use them to "train" Claude, and disclose them to others. Finally, he wasn't asking Claude for legal advice, but for information he could pass on to his own lawyers, or not. Indeed, when prosecutors tested Claude by asking whether it could give legal advice, the bot advised them to "consult with a qualified attorney."
The columnist agrees AI-generated results shouldn't receive the same protections as human-generated material. "The AI bots are machines, and portraying them as though they're thinking creatures like artists or attorneys doesn't change that, and shouldn't."
He also seems to think their output is at best second-hand regurgitation. "Everything an AI bot spews out is, at more than a fundamental level, the product of human creativity."
Re: (Score:2)
"Transliver"? What, are you against liver transplants?
Re: (Score:2)
Russia has control over its people, so they don't say much. They also pay people outside of Russia to create and spread propaganda.
These people exist everywhere. They don't care whether it's real or not, and likely know it isn't. But they spread it because they are paid.
Re: (Score:2)
Also, it's pretty amazing that a plane that got shot down seems to have landed without a runway, landing gear, or any damage. Nice work everybody.
Let's be honest (Score:4, Funny)
Re: (Score:2, Informative)
Let me correct that slightly: all humans without severe mental dysfunction have General Intelligence. It may not be a lot in most cases, but it is there. The really bizarre thing is that most humans (around 80%) choose not to use that General Intelligence, because the results can be scary and create uncertainty. Instead they typically choose to go along with what others tell them, with no fact-checking. This effect is worse (!) on important topics and less bad on unimportant ones. But even on unimp
Re: (Score:2)
General intelligence is deducing what an Orange is even though you have only been shown Apples and Pears.
Re: (Score:3)
Define "General Intelligence". As in an operational definition, i.e., something like a Turing test, that allows you to differentiate between AI and human. Oh, by the way, "human" also includes 5-year-olds, and your test must classify them as having "General Intelligence". Well?
Ask an alchemist to define gold. They fundamentally could not because they didn't understand the existence of the atomic number needed to do that. Because their model was wrong, their answers would keep changing as they did experiment after experiment. That doesn't mean, however, that they didn't know that there was something called gold and understand that they wanted it.
Re: (Score:2)
Define "General Intelligence".
Don't be a complete asshole. You can find the definition easily.
Re: (Score:1)
And that is why stupidity is actually a choice for most people and IMO they bear full responsibility for that choice as soon as they are adults.
These people were raised to not be critical thinkers. We need to hold their parents accountable along with them. I recently was told about someone whose mother is paying hundreds in child support he owes every month. If she couldn't raise him to be a decent man, at least she is paying his debt to society.
Re: (Score:2)
You know, these days I think it is 90% nature and the parents cannot actually do a lot. Still, it is better if they demonstrate what critical thinking looks like and make their kids do it. But will that ensure the kids keep doing it later? I do not think so, in most cases. That said, every single additional independent thinker is an improvement, but you cannot make the parents responsible IMO. Otherwise the numbers would be a lot different in different societies. They do not seem to be.
What I could definitely
Re: (Score:1)
It's a language model.
Indeed. Though it's not clear what you're trying to imply with that statement.
Sounding good is what it does.
No, that's not what a language model does- though it certainly can be trained to do so.
It's not self-aware;
Well, not by the definition we'd use to apply that term to a human- definitely not.
If you break it into its component words and de-anthropocentrify them? The model is certainly some level of "aware" of itself, but an equally technical definition of "aware".
it's not intelligent.....to the extent we can define that word.
This one always bugs me.
It's intelligent in every way the word is used that doesn't requ
Re: (Score:2)
It's intelligent in every way the word is used that doesn't require a religious concept of human intelligence, in that the outputs it produces display intelligence- the only objective way known to measure.
In the case of diseases we clearly recognize a difference between two different diseases with identical symptoms. For example Normal Pressure Hydrocephalus and Parkinsons present extremely similarly, but one is a problem of the brain cells and needs to be treated through brain chemistry. The other needs to be treated by physically reducing the pressure in the brain. If you only look from outside the system you may never be able to tell the difference and you will give the wrong treatment.
You don't know that
Re: (Score:2)
In the case of diseases we clearly recognize a difference between two different diseases with identical symptoms. For example Normal Pressure Hydrocephalus and Parkinsons present extremely similarly, but one is a problem of the brain cells and needs to be treated through brain chemistry. The other needs to be treated by physically reducing the pressure in the brain. If you only look from outside the system you may never be able to tell the difference and you will give the wrong treatment.
You don't know that. It is absolutely possible that there is something that we just lack the tools to define the difference. When an alchemist said that there was something special about gold that made it different from Iron they were right. When they said that iron can be transformed into gold they were wrong. We are currently at that level of understanding of intelligence.
I think perhaps you misunderstood me.
I'm not saying there is no distinction internally.
I'm saying that, in the case of "intelligence", in its anthropocentric definition, no matter how we go about it, we're going to run into Descartes.
I can't prove or disprove you're intelligent. I can only infer.
The same is simply a logical truth of the system we're evaluating as well.
This isn't a tooling problem. It's a definitional problem. We can't distinguish human intelligence from a black box that emulates it, so
Re: (Score:2)
I'm saying that, in the case of "intelligence", in its anthropocentric definition, no matter how we go about it, we're going to run into Descartes.
Doubtful on the slug, and almost certainly on the cat.
I'm fairly sure that right now intelligence is one of those things that's defined by the "I know it when I see it", but only among cognitive sciences experts.
I don't think that's because intelligence can't be simply and scientifically defined. I personally guess that's just because we have not yet defined it nor discovered the "simple" basis for it.
However, with the cat, the inverse is true. An LLM has a certain amount of intelligence that no cat can ever have.
Intelligence, as it is- the definition of which is absurdly unscientific- is certainly a function of the network.
I've used the word "skill" before as compared to "intelligence". I think that a deep learning neural network is skilled, but not intelligent. What's really inter
Re: (Score:2)
I'm fairly sure that right now intelligence is one of those things that's defined by the "I know it when I see it", but only among cognitive sciences experts.
Indeed. Therein lies the problem of saying an LLM isn't intelligent.
What constitutes intelligence evolves as philosophers become less ignorant.
I don't think that's because intelligence can't be simply and scientifically defined. I personally guess that's just because we have not yet defined it nor discovered the "simple" basis for it.
What if there is no simple basis for it? Why would there be?
I've used the word "skill" before as compared to "intelligence". I think that a deep learning neural network is skilled, but not intelligent. What has really interested and inspired my thinking is problem solving, memory, and "intelligent" action in unicellular creatures like slime moulds and some mobile bacteria. If a slime mould can show some level of "intelligence", then why should that intelligence not be present in individual neurons?
Well, a slime mold is distinct from a neuron, in that a slime mold is more analogous to a brain/hive of independent neurons acting according to some emergent pattern.
I'd argue a slime mold could be intelligent, even if the individual bacterium are pretty limited in their ability to be so.
We have to be careful not to be arguing just about the definition of words. In my view, you may be using a word "intelligence" which combines my word "skill" with my word "intelligence". However, what I'm saying is that I believe that, beyond mere knowledge and learned skill, is something which is much deeper than the mere mathematical automation of current LLMs. Not, in my view, deep like religious souls, but deep like a specific type of algorithm, a specific type of chemistry or a specific high temperature quantum phenomenon.
Agreed on the defi
Re: (Score:2)
What if there is no simple basis for it? Why would there be?
There doesn't have to be. If the basis exists but is complex, that would explain why, even though it exists, we haven't found it yet, and it might take us hundreds of years to find it. We can see that intelligence exists across the animal kingdom - octopuses, birds, reptiles and mammals all demonstrate high-level intelligence with problem solving, memory, spatial awareness and so on. We can also say that basic creatures like slugs are much less intelligent despite sharing the same common ancestor with us t
Re: (Score:2)
Further reply to this;
I just found an interesting story [slashdot.org] in a legal context.
In this case we get the statement
That doesn't fit with the way I use the words; my definition of "intelligence" definitely includes wisdom (which I believe is an outcome of intelligence applied to experience). I wond
Re: Let's be honest (Score:2)
Re: (Score:2)
I do agree the word "understand" is loaded, as is the word "intelligence".
I maintain, however, that if we unload those words, it's as absurd to say an LLM is incapable of intelligence as it is to say a dog is incapable of conscious thought- a viewpoint held by a majority of cognitive scientists not far in the past.
Religion and religious thinking taints our philosophical discussion of the mind at every step. In physics, there is a similar problem (related here as well)
Re: Let's be honest (Score:2)
Re: (Score:2)
Determinism is only approximate.
That is not a provable statement.
Truly random processes exist, or so we believe.
Fun story.
We believe this, because Bell's Theorem says that if humans have free will, there are no local hidden variables. This means quantum effects are effectively random.
If we remove the presupposition of free will, then Bell's Theorem does not apply.
Bit circular, isn't it?
I'm not sure what that has to do with free will, though.
Because a deterministic system cannot be free to choose as it wills.
This ties into the cognition aspect in the fact that even if determinism is only approximate, the brain is still a very classical
Re: Let's be honest (Score:2)
Actually, I suspect that things are random enough that if you somehow did manage to go back in time, you would find it completely impossible to get back to your previous present, even if you are careful to do everything the same. DNA will recombine differently, different babies will be born, different lottery winners, e
Re: (Score:2)
Personally I find a deterministic universe implausible because of how hard it is to make a computer program deterministic. It takes a lot of careful design choices to maintain that property.
Computer programs are 100% deterministic. If they fail to be so, then your CPU is broken.
Don't confuse "these interactions are effing too complicated to fully grok" with "sometimes 1+1=3."
Personally I find a deterministic universe implausible because of how hard it is to make a computer program deterministic. It takes a lot of careful design choices to maintain that property.
Well, to be fair- there are different kinds of determinism.
Classical physics are deterministic in principle, but not in practice.
I.e., there's enough thermodynamic noise and chaotic interactions that even in a deterministic classical regime, you have to zoom out quite a bit for things to be deterministic.
Quantum mech
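The "deterministic in principle, not in practice" point above can be sketched concretely. In the toy example below (the values are illustrative, not from the thread), every individual floating-point add is perfectly deterministic, yet the result depends on evaluation order; so any nondeterministic ordering in a program, such as thread scheduling or a distributed reduce, changes the answer even on deterministic hardware:

```python
# Each individual add is deterministic, but float addition is not
# associative: reordering the same numbers changes the rounding.
# Any nondeterministic ordering (thread scheduling, distributed
# reduces) therefore changes the final sum.
xs = [1e16, 1.0, -1e16, 1.0]

left_to_right = sum(xs)          # the first 1.0 is absorbed by 1e16
reordered = sum(sorted(xs))      # same numbers, different order

print(left_to_right, reordered)  # 1.0 0.0 -- neither is the exact 2.0
```

Both runs are repeatable on their own; the nondeterminism only appears when the ordering itself is left uncontrolled.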
Re: Let's be honest (Score:2)
Re: (Score:2)
There's always external factors.
External factors are not non-determinism.
They're simply uncontrolled factors.
In a single program, the OS can interrupt you.
Unless you're using an OS that doesn't.
In a distributed system, the timing between components isn't synchronized.
Unless it is.
Cosmic rays can flip bits and ECC will usually fix it, but not always.
Won't happen to you in your lifetime (uncorrectable).
There's always some external thing making things less predictable than you'd like.
Absolutely. And all things can be accounted for, and corrected for. Things that aren't corrected for are not an indictment of the determinism of your programming language or computer, they're an indictment of your programming.
And that isn't meant as a slight- in 99.9% of cases, correcting for all factors simply isn't necessary
Re: Let's be honest (Score:2)
Re: (Score:2)
You can stretch the argument to absurdity, of course. But deterministic behavior does not imply "can survive all problems"; it merely means "reacts to them as programmed, in a deterministic fashion".
Computers are deterministic.
Even in the case of an unrecoverable ECC error, they react deterministically. Hell, that's the entire point of ECC memory, so that the machine can react deterministically- i.e., it can know when memory has been corrupted, and react accordingly.
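As a toy illustration of that deterministic reaction (a simplified sketch, not the SECDED codes real ECC DIMMs actually use), a Hamming(7,4) code locates and flips back any single corrupted bit purely by recomputing parity:

```python
def hamming_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword."""
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(code):
    """Deterministically correct up to one flipped bit; return (data, syndrome)."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flip, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1          # the deterministic correction
    return [c[2], c[4], c[5], c[6]], syndrome

code = hamming_encode(1, 0, 1, 1)
code[4] ^= 1                          # simulate a cosmic-ray bit flip
data, syndrome = hamming_decode(code)
print(data, syndrome)                 # [1, 0, 1, 1] 5
```

Whatever single bit flips, the decode step reacts the same way every time: compute the syndrome, flip the indicated bit back, carry on.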
Re: Let's be honest (Score:2)
Re: (Score:2)
It is however an easily explained, fun thought experiment that has been used to recruit graduate students and delight cocktail party a
Re: (Score:2)
There's no ambiguity. Copyright is for humans. Period. Cory Doctorow goes into this pretty well.
https://pluralistic.net/2026/0... [pluralistic.net]
Re: (Score:2)
To err is human
Let's be real. (Score:2)
A lot of AI proponents don't have human level intelligence either...
A lot of AI proponents are proving that Good E. Nuff, AI for hire is plenty good enough to replace them.
Permanently.
The chase to achieve some kind of super-intelligence will be the end of our species. As if any entity smarter than the collective All Of Us would ever look at us meatsacks as anything other than a cancer on this planet, addicted to the Disease of Greed, as our single species forever warmongers itself to death on a planet carved up into Yours and Mine, drawing lines in the sand with blood,
Re: (Score:2)
Good funny, but not so funny. I was thinking about a related joke involving voting rights proportional to intelligence. The obvious problem is that by most tests the current weak-arsed genAIs are already smarter than most people, so they would already have fat votes getting fatter fast.
By the way, my proposal for big fat votes is already in practice. Not just a theory. We just disguise it as free speech for corporations and rich folks. They can buy as many votes as they need from poorly educated idiots who
Well, when you look at actual facts (Score:2)
That is what you are going to find. There are just way too many easily manipulated delulus out there.
"Evidence" (Score:3)
Judges getting cluey (Score:3)
It is good to see Judges getting cluey on how generative AI works and constructing robust arguments regarding its use.
All these "creative" arguments that people are using to justify its use could easily seem reasonable to someone who is not tech savvy.
I've got a turnpike to sell you (Score:2)
If you honestly don't agree, please reach out. I have a certain Governor Joseph Turner Turnpike in my possession, a turnkey money-making venture, and I'm looking to sell fast! Act now! Supplies are limited!
Bradley is stupid (Score:3)
Loose lips sink ships or in this case, his case.
A legal definition of "intelligence"? (Score:2)
It is as useful in a discussion about what it is as the legal definition of a "person", which could be anything at any given time and has changed abruptly more than once, including in the past year in the jurisdiction in question.
Completely irrelevant to the everlasting discussion here of what it "really means".
Incidentally, it is a good time to re-read the fun Asimov story "Legal rites".
They bothered to ask Claude? (Score:2)
It's astounding they bothered asking Claude that question. They are pre-supposing Claude is qualified to answer the question in the first place. What would they have done if Claude had said yes? Claude is no more qualified than a toaster to give legal advice. If it's a question of legality, then Claude is *not* the one to ask; the court should know the answer to t
Re: They bothered to ask Claude? (Score:3)
Claude is no more *legally* qualified than anyone without a license to give advice, toaster or otherwise.
But that does not mean that it can't provide a better background on a legal case than, say, a Rudy Giuliani.
Re: (Score:3)
If it's a question of legality, then Claude is *not* the one to ask; the court should know the answer to that question without having to consult anybody but the current laws.
Of course the prosecutors and judge already knew the answer. That's why the prosecutors did it - to show that the defendant should have known (by simply asking, if he didn't already know) that an LLM is not an attorney and therefore not covered by any attorney privilege.
The prosecutors wouldn't have done the exercise if they didn't know what result they would get.
Re: (Score:2)
Re: (Score:2)
What would they have done if Claude had said yes?
Presumably, just not brought it up in court.
This is a good thing (Score:2)
While it's plausible that a real artist might use AI to make real art, much of what is currently produced is a tsunami of slop, quickly and effortlessly made by scammers to defraud streaming platforms
So its legal to copy AI generated music and movies (Score:2)
Does that mean it's perfectly legal to copy AI-generated music and movies? That's good news for all the AI companies.
Re: (Score:2)
LLM != AI (Score:2)
It's fancy math. It is not AI.
It is highly unlikely that LLMs will get us to AI.
The snake oil was "we just need large enough data sets!" - It's been apparent for a while now this was a blatant lie.
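As a deliberately extreme caricature of the "fancy math" point (a toy that implies nothing about real transformer internals; the corpus is made up for illustration), a bigram model reduces next-token prediction to counting and normalizing:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predict the next token from raw counts.
corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Most likely next token and its estimated probability."""
    c = counts[word]
    token, n = c.most_common(1)[0]
    return token, n / sum(c.values())

print(predict("the"))  # ('cat', 0.666...): "cat" follows "the" 2 of 3 times
```

Modern LLMs are vastly more sophisticated than this, but the objective is the same shape: estimate a probability distribution over the next token.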
The LLM prophecy. (Score:2)
It's fancy math. It is not AI.
It is highly unlikely that LLMs will get us to AI.
The snake oil was "we just need large enough data sets!" - It's been apparent for a while now this was a blatant lie.
We already have a data set large enough.
We called it the “internet”.
Any other species observing that would collectively consider us a speed bump in the way of the new intergalactic highway.
Want to know what happened when George Orwell, Douglas Adams, and James Cameron walked into the bar together? They turned it into a place of worship. Then it exploded with no less than 37 camera angles of flair, confirming who wins in the end.
Perhaps the most advanced LLM on the planet is selling humans the
Right answer, wrong reasons. (Score:2)
We are all artificial intelligences. What we produce is based on our experiences. There are those that argue that AI programs have no soul or divine spark, but in all probability they are not that different to us. The difference probably lies in how our training data was curated. We have had lifetimes of slowly learning what is 'moral behaviour' from those that surround us. The AI lawyer that makes up references is not 'lying' as such; it just produces the answers it thinks you want to see.
Some Pentagon
Re: (Score:2)
One day we will have to deal with the attitude that AI is not 'like real people' and 'should have no rights'.
Sure. But what we're discussing is NOT AI. When we say "AI" whilst pointing at LLMs, we're wrong. There is no intelligence in there. Just math.
Facts in the case of Bradley Heppner (ClippyAI) (Score:2)
* A significant pretrial ruling (February 2026) addressed privilege claims:
* During his arrest, authorities seized electronic devices containing ~31 documents Heppner generated using Anthropic's public Claude AI tool.
* Heppner (after retaining counsel but acting independently) used the AI to analyze facts, outline defense strategies, and prepare reports related to the investigation.
* He shared some outputs with his attorneys.
* Judge Rakoff ruled these AI-g
So Heppner... (Score:2)
A dangerous precedent on lack of privilege (Score:3)
Destroying attorney client privilege just because some words touch AI is a dangerous concept in an age where AI touches more of our lives. So much of what we do passes through some level of 'AI' whether we ask it to or not. My entering these words is facilitated by the swiping keyboard on my phone. There is processing done to guess what words I want. Does that mean everything I write is now exposed? Sadly, yes.
If this ruling is allowed to stand as precedent, you may be left with scribbling notes on paper as the only way to keep your legal materials privileged.
Well, that or a truly dumb word processor that absolutely cannot connect to the Internet.
Or... An AI is created that is given a status that allows it to preserve attorney client privilege.
Re: (Score:2)
The dude explicitly went yabbering online. But yeah, I wouldn't be trusting Windoze not to copy everything to M$ servers nowadays either. Safe bet is get off of M$ products entirely.
Re: (Score:2)
"scribbling notes on paper"
The notes you scribble to yourself are not privileged, nor are notes you scribble to a third party, only notes you scribble to your attorneys.
I wouldn't be surprised to see a law (Score:1)
There is just too much money on the table. We aren't talking billions; we're talking trillions.
One of the things I don't think people realize is the consequences of letting 1/10 of 1% of the population have all the money.
Whenever we talk about taking it away from them people get nervous because they believe that the slippery slope means we're coming for their houses and their SUVs.
Never mind all the propaganda bl
Using AI as your lawyer is an intelligence test .. (Score:2)
for you, not the AI
Not digging the title (Score:2)
From the summary, it seems like the judge is ruling that AIs are not human and not official lawyers, therefore they do not qualify as either. I don't see anything stating the judges think the AI does not have human intelligence.
Re: (Score:2)
AI seems to do well at generating exceptional CLICK BAIT for many years now...
Re: (Score:2)
AI seems to do well at generating exceptional CLICK BAIT for many years now...
Yeah, and this one got me because I wanted to know how any judge actually qualifies human intelligence, which the article had nothing to do with. Smack myself on the hand for believing the article title :p
Click bait title on a slashdot post (Score:2)
There is nothin
Thinking creatures ... (Score:2)
Is that a benefit? (Score:2)
Same With Digital Photography (Score:1)
They are not created by the photographer -- they a