Comment Re:the flip side of evolution (Score 1) 24

You underestimate the cost. Even among those that survive for a few generations, most will eventually succumb to changing environmental conditions. Consider trilobites.

OTOH, that's judging by assuming that the present is the correct time-frame to evaluate from. Why should that be true? Trilobites lasted a lot longer than we're likely to. (But we've got the *potential* to last until the heat death...*IF*... But what are the odds?)

Comment Re: Probably a real and strong effect (Score 1) 121

Thinking about this more, my first response was so incomplete as to almost be a lie.

You *cannot* know reality. All you can know is a model of reality. So when you say "reality" you're actually using an abbreviation for "in my model of reality".

And when I said "physics is physics" I was so oversimplifying as to almost be lying. Consider "flat earth" vs. "spherical earth". How do you know which belief to accept? The direct sensory data seems to imply that "flat earth" is the more appropriate belief. There are lots of arguments that "spherical earth" is a better model, but those are nearly ALL based on accepting what someone else says. We are told of experiments we *could* do that would validate it, but very few people have, themselves, done the experiment. So for just about everyone the "spherical earth" model is a "social reality".

Similarly, I accept that I have a spleen, but I do this because others have told me it's true. I'm also told my tonsils were cut out, but I was unconscious when this was supposed to have happened, so I'm taking other people's word for it.

Reality, as we know it, is largely a social construct. We don't know just how completely it's a social construct, but to a large extent it is one.

Comment Re:Sure...like comparing a bicycle to a motorcycle (Score 1) 121

It's not ChatGPT's fault, but her loved ones are VERY concerned.

It's not ChatGPT's fault; it's the fault of the people who let her have access to this stuff. But that in turn is related to the issue of privacy vs. culpability vs. freedom. It's problematic to snoop on people, and we want to let people do what they want insofar as that's feasible (not harming others is one common and useful standard), but do we or don't we share blame when we enable harm?

And so I'm going to say: of course we do, if we could have known we were causing the harm. Since the operators of these public LLMs use input from users as training data, they are on some level examining it, and I don't think relying only on automated methods to do so absolves them of the consequent responsibility. They're exploiting an emotional addiction and harming another person for their own benefit, and they know it at least in the abstract; to the extent that they are ignorant of their impact, that ignorance is willful.

Comment Re:Seriously? (Score 1) 121

Stupid people have smart children and vice versa, so it's not clear that you can breed your way out of this problem. Certainly we cannot evolve more intelligence faster than the AI industry can invent more stupidity.

It's unclear how we can make people understand, at a useful level, that the illusion of intelligence does not equal intelligence. This has been a problem even with actual human output; why would people suddenly be able to tell the difference when they are talking to software?

Comment Re:Who the fuck is Linus? (Score 1) 69

Putting aside the rest of your comment here,

I hope Linux actually has a succession plan for when The Almighty Linus dies. Somehow I don't think canonization is gonna help the merge schedule.

There is such a plan. Whether it can succeed is another question. This is a legitimate and worrying concern. It's a shame you chose to pose it as you did. I think this should be thought about more anyway, though.

Comment Re:ChatGPT has been a lifesaver for me (Score 1) 113

What kind of schema do you have in which deleting a single entry ends up deleting license keys for 2000 customers? That makes no sense.

It would certainly be a schema failure, but if they had a bunch of customers (or customers' licenses, though they said "to retrieve customer information" and not "...license information") grouped under a single row and overused ON DELETE CASCADE, I could see it happening. As you say, it makes no sense to use that feature for such a grouping (which might be deleted someday), because you would be creating exactly this kind of failure...
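A minimal sketch of that failure mode, using Python's sqlite3 with an invented two-table schema (the `customer_group` and `license` names are hypothetical, not from the story):

```python
import sqlite3

# Hypothetical schema: licenses reference a customer group with
# ON DELETE CASCADE, so deleting one group row silently deletes
# every dependent license key.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite has FK enforcement off by default

conn.execute("CREATE TABLE customer_group (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE license (
        key TEXT PRIMARY KEY,
        group_id INTEGER REFERENCES customer_group(id) ON DELETE CASCADE
    )
""")

conn.execute("INSERT INTO customer_group VALUES (1, 'enterprise')")
conn.executemany(
    "INSERT INTO license VALUES (?, 1)",
    ((f"KEY-{n:04d}",) for n in range(2000)),
)

# One single-row delete on the parent table...
conn.execute("DELETE FROM customer_group WHERE id = 1")

# ...and all 2000 child licenses are gone.
count = conn.execute("SELECT COUNT(*) FROM license").fetchone()[0]
print(count)  # 0
```

The cascade is appropriate for true ownership relationships, but attaching it to a grouping that might legitimately be deleted someday is how you get a one-statement wipeout like this.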

Comment Re:Absolutely not (Score 1) 113

The reason that LLMs do these things is that they have been trained (for the overwhelming part) on human output.

No. Hallucination is orthogonal to what a model is trained on. Even if you trained an AI only on exceptional output that was all created by software, it would still hallucinate things that don't exist. This is a fundamental limit of the technology, and until it's addressed somehow (whether by preventing it, making the model recognize it and recalculate, or whatever other solution actually proves feasible), even feeding it 100% high-quality input will still result in hallucinatory output.

Comment Yes and kinda (Score 2) 113

I use LLMs for my own amusement; they are useful for that.

I have little to no visual memory so I struggle to draw even simple things. I can do drafting-style sketches OK because they are logical, but just remembering the shape of a curve even between looking at the thing and looking at the paper is difficult. So I use AI to generate images and feel zero remorse about it, since it lets me do something I cannot otherwise do — envision a concept I can imagine, but cannot picture.

For answering questions, I find that they are good only for pointing me in a direction for additional research, because you cannot trust what they say. I knew this before actually trying to get them to help me find out facts, because I asked them about things I knew about, and they spat out a mixture of accurate information and absolutely invented bullshit that looks like a correct explanation but bears only a passing resemblance to reality. They cite laws and rules that don't exist, and they make up entire concepts which they will never mention again unless you ask specifically; then there's a roughly 50/50 chance of them either inventing some bullshit background for the fake thing or explaining, without irony, that it's not a thing.
