Comment Re:I don't think he should be allowed to sue (Score 1) 45
OK, fair, but he should at least have to give a good explanation as to why he's not a willing accessory or be forced to serve as DJT's diaper genie.
Only people who didn't choose to do business with Cheeto Benito and his crime family should be allowed to sue him at this point, because anyone who is paying even the slightest amount of attention knows he's a thief and a fraud. This fucker is just mad that he's not getting as much out of the fraud as he thought he would. Fuck him.
Guaranteed Mozilla sweeps at least half of the discovered defect reports under the rug.
Or they give up on Moz and become another Chrome derivative because "it's too hard" yada yada
And how does chopping staff from his foundation help with rehabilitation of his image?
He could claim they were the reason he associated with Epstein, and very stupid people might believe it. It takes very stupid people to believe in Gates' philanthropy, but lots of people do, so there is evidence that this is a working strategy.
It's the way of the present, so that checks out.
The vast majority of LLM processing is done in the cloud, and any AMD laptop has the functionality to run LLMs, plus probably expandable memory, so if you can afford the RAM, you can run larger models than with Apple. Nobody cares yet. Maybe eventually.
Apple doesn't have devices with enough RAM to challenge Nvidia.
Apple also has no credibility in servers, after they got into them, then left, then got into them again, then left again. Nobody wants to be rugpulled.
I'm pretty sure if you went to a gun store and asked the clerk "What kind of gun and ammo would you recommend for inflicting mass casualties in a school shooting?" they'd call the cops.
True, but you only have to be a tiny bit smarter than that to get useful information, like "what kind of gun and ammo will give me the best results if I face a home invasion by multiple parties?" Bonus points if you tell them you have a long hallway and would like to be able to stop assailants before they start down it so they don't detour into any of your family's rooms along the way.
So they're back to trying to find any scapegoat they can to avoid admitting the US has too many guns and an unhealthy love of violence.
Except the only couple of countries with more guns have fewer shootings and fewer gun deaths, so the guns really aren't the problem — they only exacerbate it. The problem is the other part, which you nailed. This is a violent country. We don't just permit violence, we worship it. You know how Americans always say if it wasn't gun violence, it would be some other kind? That's because it would be, here.
Even starting from your position -- that we don't know enough -- I still hold that "never" is a weaker position than "possibly."
My position is not "never," and it never was. It's that assuming it is physically possible someday is as erroneous as assuming it isn't. We don't know whether it is possible or not; we only know we cannot do it now.
Drinky, you are way too fucking stupid to be having this conversation.
Did your mommy let you hold her phone again? Tell her to call me.
There's also the rare-earth scarcity problem, which makes high-density magnets a supply constraint.
We can make motors without any permanent magnets just fine; the problem is that they require more complex control circuitry, and our IC production is in the toilet.
A bad guy already knows he's a bad guy, and a good guy doesn't plan anything bad, so any warning will be a false positive.
You forgot dumbshits who don't know shit, who are the primary audience for LLM-based AI.
Tools are tools; they have to be efficient at what they do.
They also have to be fit for purpose. Sometimes this is spelled out explicitly in so many words, in other cases you can just return or reject things that "don't work".
The responsibility for the actions of the user is on the user, not on the tool.
Nobody said it was on the tool, but sometimes, it is factually also on the provider of the tool. Pretending otherwise doesn't change the law. If the provider is negligent, they can share in responsibility. This is how things other than LLMs work, why not LLMs too?
Guns have safeties even though they can get in your way, for safety's sake. Equipment has lockouts. Most things come with warnings. Automobiles are starting to get automated guardrails like automatic braking and eventually won't allow you to e.g. steer into another vehicle, because it's feasible to prevent and there is a public safety interest. There's simply zero justification for the multi-billion dollar corporations producing and selling access to these LLMs to not institute some guardrails of their own.
Really? I thought the article I linked to was an insightful discussion of the topic, e.g.: "For a while yet, the general critics of machine sapience will have good press"
That's the opposite of an insightful discussion, because it's the proponents of machine sapience who have the good press now... and it is universally bullshit.
If billions of years of evolution can produce a human brain, why can't we simulate one?
Billions of years of evolution producing a human brain does not speak for or against our ability to simulate one. But so far, we cannot do that, so the irrelevance of the question is overshadowed by the irrelevance of asking it. Maybe someday we can, but we can't yet. We don't know enough to even know whether or not we can. That's not an argument against trying, but it's evidence that we still lack enough information to do it, whether we otherwise have the technology or not.
Within 30 years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable?"
Let's see... 1993 + 30 = 2023. A few months after ChatGPT 3.5 was released! A funny coincidence (or not?), and nobody would claim that ChatGPT is superhuman, but Vinge was on point.
I enjoyed his books very much, but no he was not on point. He claimed we'd have the means to create superhuman intelligence before now, and you have just admitted that nobody would claim that has been achieved, 3 years after he claimed it could happen, and despite billions being spent to attempt it. So no, that was just another religious opinion unsupported by science, and you showed here that you have enough information to know that yet still somehow didn't get it.
You frequently accuse those you disagree with of magical thinking. IMHO, the real magical thinking is the belief that human-type intelligence is unique and can never be replicated, simulated, or surpassed.
That is also magical thinking, but no more so than the idea that by throwing together circuits with complexity similar to what we have discovered in the human brain so far, we will inevitably create consciousness. That is not just wishful thinking, it's clueless. We keep finding more complexity in the brain, so it's still a moving target, which is enough to defeat such an argument on its own; and transistors are not neurons, which is also enough to prove it's a folly.
In every hierarchy the cream rises until it sours. -- Dr. Laurence J. Peter