
Comment Re:wow (Score 1) 27

And how does chopping staff from his foundation help with rehabilitation of his image?

He could claim they were the reason he associated with Epstein, and very stupid people might believe it. It takes very stupid people to believe in Gates' philanthropy, but lots of people do, so there is evidence that this is a working strategy.

Comment Re:Anything to avoid the topic of gun control (Score 1) 97

I'm pretty sure if you went to a gun store and asked the clerk "What kind of gun and ammo would you recommend for inflicting mass casualties in a school shooting?" they'd call the cops.

True, but you only have to be a tiny bit smarter than that to get useful information, like "what kind of gun and ammo will give me the best results if I face a home invasion by multiple parties?" Bonus points if you tell them you have a long hallway and would like to be able to stop assailants before they start down it so they don't detour into any of your family's rooms along the way.

Comment Re:I'm not buying it (Score 1) 97

So they're back to trying to find any scapegoat they can to avoid admitting the US has too many guns and an unhealthy love of violence.

Except the only couple of countries with more guns have fewer shootings and fewer gun deaths, so the guns really aren't the problem — they only exacerbate it. The problem is the other part, which you nailed. This is a violent country. We don't just permit violence, we worship it. You know how Americans always say if it wasn't gun violence, it would be some other kind? That's because it would be, here.

Comment Re:Code (Score 1) 75

Well, no, that assumes exactly one implementation for a given feature in the wild.

Imagine generating a random string. Hundreds of codebases will have that same function. So this process may pull that from any of those codebases and not necessarily from the source codebase.
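To illustrate the point, here's a generic sketch (not taken from any particular codebase — the function name and details are my own) of the kind of utility that exists, nearly verbatim, in countless projects:

```python
import random
import string

def random_string(length=16):
    """Generate a random alphanumeric string.

    A helper like this appears almost identically in hundreds of
    codebases, which is exactly why an LLM emitting it could have
    "learned" it from any one of them rather than the codebase
    being cloned.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

print(random_string())
```

Dozens of technically distinct variants of this (different alphabets, `secrets` instead of `random`, list comprehension vs. loop) all do the same job, so provenance is genuinely ambiguous.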

It's never generating fundamentally novel code, but it is drawing from a huge body of training data that includes the same thing done dozens or hundreds of times in technically distinct code.

Comment Re:We need humility, not arrogance (Score 1) 136

If starting with your position -- that we don't know enough -- I still stand with the side that says "never" is the weaker position than "possibly."

My position is not "never," and it never was. It's that assuming it is physically possible someday is as erroneous as assuming it isn't. We don't know whether it is possible or not; we only know we cannot do it now.

Comment Re:Area Mom Regrets Looking Under Bed (Score 1) 75

But it's meant as a proof point of our current interpretation of LLM and copyright. So far this "counts" as clean room because courts have not said LLM ingest is a violation, and they are using the LLM to launder the code to an intermediate form and then to code based on the 'clean room' finding.

So while you are right in a sense, the point is from a court perspective this is "equivalent" to clean room unless new laws/court cases amend the status quo.

Comment Re:Chatbot Lies (Score 1) 97

A bad guy already knows that he is a bad guy, a good guy does not plan anything bad, so any warning will be a false positive.

You forgot dumbshits who don't know shit, who are the primary audience for LLM-based AI.

Tools are tools, they have to be efficient on what they do.

They also have to be fit for purpose. Sometimes this is spelled out explicitly in so many words, in other cases you can just return or reject things that "don't work".

The responsibility for the actions of the user is on the user, not on the tool.

Nobody said it was on the tool, but sometimes, it is factually also on the provider of the tool. Pretending otherwise doesn't change the law. If the provider is negligent, they can share in responsibility. This is how things other than LLMs work, why not LLMs too?

Guns have safeties even though they can get in your way, for safety's sake. Equipment has lockouts. Most things come with warnings. Automobiles are starting to get automated guardrails like automatic braking and eventually won't allow you to e.g. steer into another vehicle, because it's feasible to prevent and there is a public safety interest. There's simply zero justification for the multi-billion dollar corporations producing and selling access to these LLMs to not institute some guardrails of their own.

Comment Re:We need humility, not arrogance (Score 1) 136

Really? I thought the article I linked to was an insightful discussion of the topic. e.g.: "For awhile yet, the general critics of machine sapience will have good press

That's the opposite of insightful discussion, because it's the proponents of machine sapience who have the good press now... and it is universally bullshit.

If billions of years of evolution can produce a human brain, why can't we simulate one?

Billions of years of evolution producing a human brain speaks neither for nor against our ability to simulate one. But so far we cannot do it, so the question is moot for now. Maybe someday we can, but we can't yet; we don't know enough even to know whether we can. That's not an argument against trying, but it is evidence that we still lack the information to do it, whether or not we otherwise have the technology.
