Comment Re:At least they are consistent (Score 1) 57

There isn't really a human analog.
In previous memorization lawsuits, it has required extremely suggestive priming within the context to produce the output, to the point of being eyeroll-inducing.

There's no case in human copyright law where you have to judge the infringement of "someone fed me 60% of a blurb of text, and I was able to complete the rest of it."

Comment Re:What about top speed? (Score 1) 53

Also, the only realistic way to create a true "unintended acceleration" without pedal misapplication is something getting stuck in the pedal or the pedal getting stuck down, which is not actually a subtle thing (again, these things have happened, but they're dwarfed by how often people hit the wrong pedal). A bad sensor reading alone doesn't cut it: as a general rule, pedals have multiple sensors reading the pedal position (typically 2-3), and they have to agree with each other or the target acceleration is set to zero. A single sensor failure doesn't cut it. Also, Hall-effect sensors are highly reliable.
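As a rough illustration of that redundancy logic (a minimal sketch with made-up function names and thresholds, not any particular manufacturer's implementation):

# Hypothetical pedal plausibility check, for illustration only.
def pedal_demand(sensor_readings, max_disagreement=0.05):
    """Requested acceleration as a 0..1 fraction, or 0.0 if the redundant sensors disagree."""
    if len(sensor_readings) < 2:
        return 0.0   # lost redundancy: fail safe, demand nothing
    if max(sensor_readings) - min(sensor_readings) > max_disagreement:
        return 0.0   # sensors disagree: treat it as a fault, demand nothing
    return sum(sensor_readings) / len(sensor_readings)

print(pedal_demand([0.42, 0.43, 0.41]))   # sensors agree -> ~0.42
print(pedal_demand([0.42, 0.90]))         # sensors disagree -> 0.0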

Oh, and there's one more "failure mechanism" which should be mentioned: creep. Some EVs are set to creep or have creep modes, to mimic how an ICE vehicle creeps forward when you lift your foot off the brakes. If someone forgets they have this on, it can lead to "unintended acceleration" reports. There have been cases where, for example, the driver gets in an accident that isn't intense enough to trigger the crash sensors, and the car keeps "trying to drive" after the accident (i.e., creep is still engaged). People really should not engage creep mode, IMHO - the fact that ICEs creep forward is a bug, not a feature.
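Roughly, that behavior amounts to the following (a minimal sketch with hypothetical thresholds; real implementations also gate on gear, doors, crash signals, etc.):

# Hypothetical creep logic, for illustration only.
def creep_torque_request(creep_enabled, brake_pressed, accel_pedal, speed_kph,
                         creep_torque_nm=40, max_creep_speed_kph=8):
    """Return a small positive torque when creep applies, else 0."""
    if not creep_enabled or brake_pressed or accel_pedal > 0.0:
        return 0.0
    if speed_kph >= max_creep_speed_kph:
        return 0.0
    return creep_torque_nm   # car keeps "trying to drive" with no pedal input at all

print(creep_torque_request(True, False, 0.0, 0.0))    # 40 -> rolls forward on its own
print(creep_torque_request(False, False, 0.0, 0.0))   # 0  -> stays put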

Comment Re:What about top speed? (Score 3, Informative) 53

All anyone in these "runaways" had to do was lift their foot off the accelerator. Or even leave their foot on the accelerator and just press the brakes, as the brakes can overpower the motor (think of how fast you accelerate when you floor the accelerator at highway speeds vs. how fast you slow down when you slam on the brakes).

Regulatory agencies the world over are constantly getting reports of "runaway unintended acceleration". Nearly every time they investigate, the person mixed up the accelerator and the brake. When the car starts accelerating, in their panic they push the "brake" (actually the accelerator) harder, and keep pushing it to the floor trying to stop the car. In their panic, people almost never reevaluate whether they're actually pushing the right pedal. It's particularly common among the elderly and the inebriated, and accounts for around 16 thousand crashes per year in the US alone.

If your car starts accelerating when you're "braking", snap out of the panic, lift your foot, make sure you *actually* put it on the brake, and you'll be fine.

Comment Re:amazing (Score 1) 138

A rational and level-headed view.
That the Chinese are making some good cars, and that the industry is built on a bedrock of government money, can both be true at the same time.
If the rug is pulled out from under them, the work they've done will still have been done. The bubble will pop, and then they'll produce reasonable quantities of good cars.

Bubbles can suck for a lot of reasons, but it's not like 2008 made people stop needing houses, or the dot-com bust caused the internet to die.

Comment Re:At least they are consistent (Score 1) 57

Training on copyrighted material is legal.

I do, however, agree that there should be some kind of legal requirement for 1) a bona fide effort to prevent memorization, and 2) a DMCA-like regime that lets copyright holders apply output filters to public services.

However- you still haven't addressed the fundamental problem.
Where do we draw the line?
How much of the output was given in the input? What % of identical output do we call infringement?
These aren't simple questions.
The limits are simple - obviously, if someone says, "tell me the lyrics for..." and it produces them 100% verbatim, that's wrong.
However, what if they're 60% wrong? Or what if the user says, "what song lyrics go like this: [50% of the lyrics here]"?

This is the problem. Not the limits. Rational people aren't arguing about the limits.
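For what it's worth, the mechanics of such an output filter aren't the hard part. Here's a minimal sketch (hypothetical function names; the 8-word window and any cutoff are arbitrary illustrations, not proposed legal standards):

# Hypothetical verbatim-overlap check against a rightsholder-supplied reference text.
def ngram_set(text, n=8):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(candidate_output, reference_text, n=8):
    """Fraction of the candidate's word n-grams that appear verbatim in the reference."""
    cand, ref = ngram_set(candidate_output, n), ngram_set(reference_text, n)
    return len(cand & ref) / max(len(cand), 1)

reference = "some registered lyrics would go here " * 20   # stand-in for the real work
output = reference[:200] + " and then the model wanders off into something else entirely"
print(verbatim_overlap(output, reference))
# A service could block or rewrite output above some threshold - but picking that
# threshold, and deciding how prompt-supplied text counts, is exactly the hard question.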

Comment Re:So is it... (Score 1) 57

1) 1850-1900 is not "The Little Ice Age"
2) The Little Ice Age was not global, while you're talking about global climate reconstructions. The planet as a whole was not cold in the Little Ice Age.
3) You're talking about the basis of a particular climate target, not what the science is built on.
4) The mid-1800s is around when we started getting reasonably good, regular, quasi-global ground climate measurements, hence it's convenient for establishing a target. That's why HADCRUT, which is based on historic measurements, starts in 1850. The first version of HADCRUT started in 1881, when the data was even better, but as more old data was recovered and digitized, it was extended back to 1850. You can go further back, but you not only lose measurement quality, you're also increasingly confined to mainly regional records (Europe).
5) 1850-1900 was not a global cold period.

There's no conspiracy here. The baseline is set relative to when we have actual comparative data, and the difference between that baseline and true preindustrial levels is a few tenths of a degree, not the "several degrees" that climate targets deal with.
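To make the arithmetic concrete (a minimal sketch with made-up temperature numbers, just to show how a baseline anomaly is computed):

# Hypothetical annual global-mean temperatures (degrees C); values are illustrative only.
baseline_years = {1850: 13.7, 1875: 13.8, 1900: 13.7}   # stand-in for 1850-1900 records
recent_year = 14.9                                       # stand-in for a modern year

baseline_mean = sum(baseline_years.values()) / len(baseline_years)
anomaly = recent_year - baseline_mean
print(f"warming relative to the 1850-1900 baseline: {anomaly:+.2f} C")
# Shifting the baseline by a few tenths of a degree shifts the anomaly by the same
# amount - small compared to the degree-scale changes the targets are about.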

Comment Re:At least they are consistent (Score 1) 57

Statistically speaking, the room full of monkeys, given an infinite timeline, might eventually type Shakespeare.

True, and irrelevant.

A LLM-AI simply regurgitates what it found in its city-sized database.

Incorrect.

It is not intelligent

Incorrect.
intelligence (n)
the ability to acquire and apply knowledge and skills.

it does not make decisions on its own

Demonstrably and idiotically incorrect.
You're arguing that the sky is neon black- what angle are you going for?

it's not working to solve world hunger without any human input of any kind...

This is just completely wrong.
Though I certainly wouldn't test it - without alignment training, I'd say it's about equally likely to move to end human hunger via genocide.

it's not writing the next great American novel when it's not busy regurgitating 'how to solve long division'.

They're quite capable of writing a novel. Also composing music. Designing a building.

It responds to queries however it's programmed to, that's it.

Incorrect.
It is not programmed to do shit.
It responds to a set of tokens by turning them into N-dimensional vectors and running them through a network that was randomly initialized and then tuned, via gradient descent, to turn those into a certain set of output probabilities. The solutions it comes up with for doing so are bounded only by the standard Turing limits - limits no human has ever been demonstrated to be free of, either.
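To spell out what "turning tokens into vectors and running them through a network" means, here's a toy forward pass (random untrained weights and a made-up 4-token vocabulary - nothing like a real model's scale, just the shape of the computation):

import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "down"]           # toy vocabulary
d = 8                                           # embedding dimension

E = rng.normal(size=(len(vocab), d))            # token embeddings (randomly initialized)
W = rng.normal(size=(d, len(vocab)))            # output projection (randomly initialized)

tokens = [0, 1, 2]                              # "the cat sat"
h = E[tokens].mean(axis=0)                      # crude stand-in for the network's hidden state
logits = h @ W
probs = np.exp(logits) / np.exp(logits).sum()   # softmax: probabilities for the next token
print(dict(zip(vocab, probs.round(3))))
# Training = nudging E and W by gradient descent so these probabilities match the data.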

It's a simulation of a conversation, even when it "hallucinates", that's it.

I really think you might just be an idiot.
The only way you can define "conversation" to make what the LLM does a simulation, and what you do real, is by specifically defining "conversation" in anthropocentric terms- i.e., saying it's only a conversation if a human does it.
That's circular and idiotic. Are you an idiot?

So, being that it's a computer, it's legal for it to cough up entire sections of text from The Stand and not pay royalties to Stephen King?

Is it legal for a set of monkeys to do so?

Right, you don't have a right to perform a copyrighted work, but because it's a computer/cell phone mostly used in the privacy of your home, it's legal for it to spew copyrighted info without ever paying for it.

In short, yes.
Of course- you don't really use it in the privacy of your own home. I can tell from your general level of ignorance on the topic that you aren't the kind of person that can afford to. Sure, you might be able to run a lobotomized model on your 3060, or some shit, but you're just playing.

"It didn't copy material": Your words...

Correct.

"To be clear: Reproducing exact texts is a training failure. It's a mistake." That implies copying...

Incorrect. It means the embedding vectors were trained to the point of single descent, without going to double descent, where generalization happens.
Are you seriously trying to argue that anything that can produce a set of text has copied it?
That not only doesn't meet the legal definition, it doesn't even meet fucking logical muster.

if it can reproduce exact texts, that means it has a copy of a book in the database someplace.

That is absurd, and incorrect.
If I can produce the digits of pi, does that mean I have an exact copy of it some place?
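To make the pi point concrete: a spigot generator (this is the well-known Gibbons unbounded spigot, included only to illustrate producing output without storing it, not anything about how LLMs work internally) streams digit after digit while holding nothing but a handful of integers - no stored copy of pi anywhere:

def pi_digits():
    # Gibbons' unbounded spigot: emits decimal digits of pi from a few integers of state.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print("".join(str(next(gen)) for _ in range(10)))   # 3141592653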

Same thing with the code for a text box...

Wrong.

it was trained (read: crawled) the web

The problem here is that you don't actually know what the word "trained" means in this context. You're using it the way you understand it, but you have the understanding of a 5-year-old, and it's leading you to misuse it.

most likely including GitHub... it doesn't create new code, it spews what it was trained on.

Demonstrably false.

You're out of your depth.

Comment Re:So is it... (Score 2) 57

When they say "pre-industrial levels", when do you think they mean? The 19th century (even though the industrial revolution was well underway), usually 1890 specifically.

What year is used depends entirely on the study. Some start at the advent of satellite measurements, some at the advent of modern ground-based measurements, some with the era of semi-reliable ground-based measurements, some incorporate further back with more fragmentary measurements, and others use proxies - some recent proxies from 200, 300, 400, etc. years ago, others from thousands, tens of thousands, hundreds of thousands, or millions of years ago or more. There is no single timeframe that is examined. Numerous studies evaluate each different source, and the different proxies are commonly plotted relative to each other.
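A minimal sketch of what "plotted relative to each other" means in practice (made-up numbers; real reconstructions do this with proper uncertainty handling): each record is converted to anomalies over a reference period both records cover, so they can sit on the same axis.

# Two hypothetical overlapping records with different absolute offsets (values invented).
instrumental = {1900: 13.8, 1950: 14.0, 2000: 14.5}
proxy        = {1800: -0.1, 1900:  0.2, 1950:  0.4}   # proxy already in its own relative units

reference_years = [1900, 1950]                         # period covered by both records

def to_anomalies(record):
    ref_mean = sum(record[y] for y in reference_years) / len(reference_years)
    return {year: value - ref_mean for year, value in record.items()}

print(to_anomalies(instrumental))   # both records are now relative to the same 1900-1950 mean
print(to_anomalies(proxy))          # and can be plotted on one axis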
