
Comment Re:AI is like a Ouija (Score 1) 68

That's the thing with metaphors: they capture some similarities, but there are also points of divergence. The point is, my metaphor was not meant to be understood as a technical description of the system's workings.

For the unsuspecting soul who approaches this modern oracle without the faintest idea of how it works, the experience of facing unexpected demons could serve as a warning of the dangers they may face if they approach the tool without caution.

Comment AI is like a Ouija (Score 0) 68

People compare AI and robots with Frankenstein's monster (or with Pinocchio, on a good day, if they want to give the story a positive spin), the construct which gains a life of its own.

But current LLM chats are more aptly compared with a Ouija board. The machine itself is inert, and you can treat it as a playful activity. But the model contains within it the highlights of a whole culture, compressed during its training. You can access the souls of all the authors whose works were used for learning; but also of all the internet fanatics, trolls and scammers. When you set the machine in motion, you never know whose spirit you are invoking to answer.

Comment Re: What I don't like about Dawkins (Score 1) 400

There is no room for it to manifest in a computer program. There is no room for any "magic" in computer programs.

That's true for classic software in a trivial way, in the sense that a sequence of logical inference steps (i.e. a deterministic symbolic program) does not reflect upon itself.

However, it may be that the computer program is not conscious, but the system running the software is. LLMs in particular generate their output not from specific instructions included in the program, but from the weights trained into the model; the software instructions are required for the weights to be interpreted, but the outcome doesn't necessarily follow the rules of a formal system and an inference process.

Current LLMs do not have consciousness because their processing is too simple for it to emerge, not because the software substrate is deterministic and mathematical. If the base software processed the model's weights in ways similar to how neurons generate brain waves, it is plausible that the emergent system-level information patterns appearing at the data level could exhibit the attributes of consciousness, including self-perception and self-reflection. This holds even if the computer software is deterministic, in the same way that the neurons in our brain behave in deterministic electro-chemical ways.
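The split between fixed interpreter code and trained weights can be sketched with a toy example (Python, with made-up numbers; no real model's code or parameters). The forward pass below is fully deterministic, yet its behaviour is determined entirely by the parameters it is handed, not by the instructions themselves:

```python
import math

def forward(weights, biases, x):
    # Deterministic interpreter: the same instructions run for any weights.
    # Each layer computes tanh(W @ x + b), row by row.
    for w_layer, b_layer in zip(weights, biases):
        x = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(w_layer, b_layer)]
    return x

# Two different "models": identical code, different learned parameters,
# and therefore different behaviour on the same input.
model_a = ([[[0.5, -0.2], [0.3, 0.8]]], [[0.1, -0.1]])
model_b = ([[[-1.0, 1.0], [2.0, -2.0]]], [[0.0, 0.0]])

x = [1.0, 2.0]
print(forward(*model_a, x))
print(forward(*model_b, x))
```

Running it twice with the same model always gives the same answer (the substrate is deterministic), while swapping the weights changes the behaviour completely, which is where the interesting properties of a trained model actually live.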

Comment Re: What I don't like about Dawkins (Score 1) 400

On the contrary, it means that neuroscientists have measured precise ways in which the brain waves of visual and auditory processes converge into decisions before the person reports being conscious of making them; and that they have studied precise ways in which altering the brain's chemistry affects the person's mental state. Just look up the papers on these experiments.

Comment Re:Define "conscious" (Score 1) 400

The problem is that we can't define consciousness. No one can agree on what it means, or whether it means anything at all.

No way. We may not have a full scientific understanding, but neuroscience has made huge advances in explaining how consciousness emerges in the brain and how it is affected by changes in its low-level processes.

We cannot say that machines will never, at some point, have similar emergent patterns that could become conscious. But we can say for sure that the current ramblings of text generation from LLMs can't be conscious, because they are created directly by much simpler low-level deterministic computations.

The long LLM-generated dissertations that people mistake for conscious reflections do not come anywhere near the complex introspective processes that we know are involved in having consciousness; they are just mechanical pattern generation from the highly compressed encoding of the human culture on which they have been trained. It's true that our own brains also learn by highly compressing our lived experiences, but we know for sure that our consciousness involves something more than just compiling memories.

Comment Re:Conciousness isn't as mysterious as you thought (Score 1) 400

What he is saying is that it "looks enough like actual consciousness that it must be it", but that is not sound reasoning.

Something can be functionally equivalent enough to the real thing to give the impression of being the real thing without actually being the real thing.

That nails it. Too many people think that AI models are either Pinocchio or Frankenstein, a constructed being who gained a life of its own, becoming friendly or terrifying; when in fact the current batch is nothing more than The Wizard of Oz, faking the appearance of an awesome entity because some human behind the curtain benefits from making you believe that.

Comment Re: What I don't like about Dawkins (Score 4, Insightful) 400

If it can, then it breaks the deterministic behavior of the known and understood physical components.

What makes you believe that? Our current best understanding of consciousness is that it's an after-the-fact rationalisation of the multiple low-level brain processes that converge into a subconscious decision. If that's the case, consciousness doesn't influence the external world in a non-deterministic way.

If LLMs are not conscious, it's because they lack this high-level aggregate feedback loop, not because consciousness needs to be non-deterministic. All their outputs are created from low-level reactions, like the reflexes of an amoeba growing towards the gradient with more food in its environment.

Comment Re:Do the home owners (Score 1) 162

Using the waste heat makes much more sense in a new development, as the properties would be designed to make use of the waste heat rather than having to retrofit it later alongside a conventional heating system.
You would assume that the server farm would have its own connectivity, and having installed it, they could use the same physical lines to provide service to the residents, so long as it's optional and residents aren't forced to use this specific provider (their service could be terrible).

Submission + - AI finds signs of pancreatic cancer before tumors develop (nbcnews.com)

fjo3 writes: An AI model developed at the Mayo Clinic in Rochester, Minnesota, detected abnormalities on patients’ CT scans up to three years before they were diagnosed with pancreatic cancer, according to research published this week in the journal Gut.

The scientists behind the model, which is now being evaluated in a clinical trial, trained it by feeding it CT scans from patients who had been screened for other medical conditions and were later diagnosed with pancreatic cancer. The team then had radiologists review the scans and compared their ability to find early signs of cancer with that of the AI model. The model was found to be three times better at identifying the early signs.

Comment Re: Yes (Score 1) 192

A lot of school systems are set up to memorise answers to exam questions, rather than actually understand the topic.
So the homework doesn't need a teacher's supervision, because the kid doesn't need to understand the content; he just has to keep reading it until he remembers it.

Ideally you should be taught the topic properly, and the teacher is around to make sure that you actually do understand and aren't just repeating memorised answers.

Comment Re: Yes (Score 1) 192

No one needs homework.
If you're having to do work at home after school, it means the teacher hasn't done their job of teaching the material in class.

What you're describing is a classroom that is a poor place for learning, due to disruption from other kids (such as bullying) or a class that moves at the pace of the slowest kid. All of these are faults of the school and the teachers, not something to pass on to the kid.
