Comment Re:Good use. (Score 2, Interesting) 48

The main question is whether the plant is still safe. It hasn't been used in years. Has it been kept in good condition? Was the design meant to be idled for years? What are the risks of restarting that particular reactor design after all those years? Is the land around the plant safe for its workers after reactor 2's accident all those years ago? And what plans are in place to prevent what happened at reactor 2 from happening at reactor 1?

I don't actually know the answer to any of those questions, but I hope experts are actively asking them.

Comment Re:iPhone Unavailable - try again in 1 minute (Score 2) 54

If you are a programmer and you are given clear instructions on what is expected, then yes. If you are a programmer and you are not given clear instructions, then no. However, if you are the technical lead/architect, then you really should be responsible for it.

OTOH, if you are a programmer and you raise these concerns, then you are on your way to becoming a technical lead/architect.

In my systems I insist we keep a database table of common passwords (tens of thousands of them) and do not allow people to use them either.
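For illustration, a minimal sketch of that kind of check in Python, assuming a SQLite table named common_passwords with a single password column (both names are hypothetical; adapt to your actual schema):

import sqlite3

def is_common_password(conn: sqlite3.Connection, candidate: str) -> bool:
    # Look the candidate up in the denylist table of known-common passwords.
    # Assumed (hypothetical) schema: common_passwords(password TEXT PRIMARY KEY)
    row = conn.execute(
        "SELECT 1 FROM common_passwords WHERE password = ? LIMIT 1",
        (candidate,),
    ).fetchone()
    return row is not None

def validate_new_password(conn: sqlite3.Connection, candidate: str) -> None:
    # Reject any password that appears in the common-password table.
    if is_common_password(conn, candidate):
        raise ValueError("That password is too common; please pick another.")

An indexed exact-match lookup stays fast even at tens of thousands of rows, which is why a plain table works fine for this.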

Comment Re:Computers don't "feel" anything (Score 1) 39

It's different from humans in that human opinions, expertise, and intelligence are rooted in experience. Good or bad, and inconsistent as it is, that is far, far more stable than AI. If you've ever tried to work on a long-running task with generative AI, the crash in performance as the context rots is very, very noticeable, and it's intrinsic to the technology. Work with a human long enough and you will see the faults in his reasoning, sure, but it's just as good or bad as it was at the beginning.

Comment Re:Computers don't "feel" anything (Score 2) 39

Correct. This is why I don't like the term "hallucinate". AIs don't experience hallucinations, because they don't experience anything. The problem they have would more correctly be called, in psychological terms, "confabulation" -- they patch holes in their knowledge by making up plausible-sounding facts.

I have experimented with AI assistance for certain tasks, and find that generative AI absolutely passes the Turing test for short sessions -- if anything, it's too good, too fast, too well-informed. But the longer the session runs, the more the illusion of intelligence evaporates.

This is because, under the hood, what AI is doing is a bunch of linear algebra. The "model" is a set of matrices, and the "context" is a set of vectors representing your session up to the current point, augmented during each prompt response by results from Internet searches. The problem is that the context takes up a lot of expensive high-performance video RAM, and each user only gets so much of it. When you run out of space, the oldest material drops out of the context, which is why credibility declines the longer a session runs. You start with a nice empty context, bring in some Internet search results, run them through the model, and it all makes sense. Once you start throwing out parts of the context, it turns into inconsistent mush.
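As a toy illustration of that eviction behavior (not any particular vendor's implementation; the class name, token budget, and deque-based window are assumptions for the sketch):

from collections import deque

class ToyContextWindow:
    # Toy model of a fixed-size context: once the token budget is
    # exhausted, the oldest tokens are silently dropped.
    def __init__(self, max_tokens: int) -> None:
        self.max_tokens = max_tokens
        self.tokens: deque[str] = deque()

    def append(self, new_tokens: list[str]) -> None:
        self.tokens.extend(new_tokens)
        # Evict from the front until we fit the budget again; the
        # earliest parts of the session vanish first, which is why
        # long sessions gradually lose coherence.
        while len(self.tokens) > self.max_tokens:
            self.tokens.popleft()

window = ToyContextWindow(max_tokens=8)
window.append("tokens from the start of the session are".split())
window.append("pushed out as newer tokens arrive".split())
print(list(window.tokens))  # only the 8 most recent tokens survive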

Comment Re:working (Score 1) 23

It is like saying: someone will do some work for free because they like it, so let's make sure we take away the product of their work; they don't need it anyway. How is that a moral stance, and how is it good economically? People feel a certain way when someone tries to steal from them. It is one thing to work, even if you don't have to, while understanding that the result of your work is yours. It is a completely different proposition to enslave someone just because they can survive without keeping the results of their work. Practically speaking, if people see this type of attitude, they will choose a different jurisdiction for their work, one where there won't be such blatant abuse.
