
Comment ah yes... secure software development... (Score 1) 33

It's hard enough to get actual developers to properly consider security. Not surprised at all that vibe coders don't.

Plus, of course, most of the training data is insecure to begin with.

But let them learn by fire that there's a reason actual programmers take time to ship a product, and it's not that the AI can type faster.

Comment Re: What I don't like about Dawkins (Score 1) 374

There is no room for it to manifest in a computer program. There is no room for any "magic" in computer programs.

That's true for classic software in a trivial way: a sequence of logical inference steps (i.e. a deterministic symbolic program) does not reflect upon itself.

However, it may be that the computer program is not conscious, but the computer running the software is. LLMs in particular generate their output not from specific instructions included in the program, but from the weights trained into the model; the software instructions are required to interpret the weights, but the outcome doesn't necessarily follow the rules of a formal system and an inference process.

Current LLMs do not have consciousness because their processing is too simple for it to emerge, not because the software substrate is deterministic and mathematical. If the base software were processing the model's weights in ways similar to how neurons generate brain waves, it is plausible that emergent system-level information patterns appearing at the data level could exhibit the attributes of consciousness, including self-perception and self-reflection. This is true even if the computer software is deterministic, in the same way that the neurons in our brain behave in deterministic electro-chemical ways.

Comment Re:The lab-rat audience. (Score 1) 74

You might do better with Xanax than psilocybin.

After the third medical professional in the same room within the same hour asked me about the medications I was on during my last physical, it dawned on me that a “none” answer from a person my age living in the United States of Pharmaceuticala was such an unbelievable outlier that I had to be asked thrice.

You and me both. I had to go get some antibiotics for Lyme disease at an urgent care center a few years ago, and was met with the same disbelief. "You're telling me you don't take any maintenance drugs?" "No sir, I am not".

They gave me a prescription, and a lecture on how I needed to get to my regular physician to get these life-extending drugs. I thanked them, and ignored them.

And most of my peer group is on multiple drugs; all are impotent. When the conversation tends toward appreciation of the opposite sex, they all note that "those days are long gone". Sorry, Pharma: even if your drugs made me last a little longer, I'll happily give up a few months for the ability to get frisky with the missus.

since I appear to be doing much better than those who started the trend of snappin’ zanny bars like they were Slim Jims. We have generational side effects being born from manufactured ones now. The long-term SSRI future will not even be measured in brightness.

Oh gawd, no one should be taking Xanax. Once on any of these benzodiazepines, if you want to stop, you might die. A nurse friend wanted off them; it took her a year. Suggesting Xanax is on the same level as suggesting heroin for a headache.

Jordan Peterson, intelligent and interesting regardless of whether we like his politics or not, was terribly messed up by Klonopin withdrawal. It was prescribed to counter anxiety when his wife was being treated for kidney cancer. He's a shell of his former self: https://www.aol.com/news/jorda... Ironically, his wife is doing okay now; he is not: https://www.aol.com/news/jorda...

Comment Re: What I don't like about Dawkins (Score 1) 374

On the contrary, it means that neuroscientists have measured precise ways in which the brain's visual and auditory processes converge into a decision before the person reports being conscious of making it, and have studied precise ways in which altering brain chemistry affects the person's mental state. Just look up the papers on these experiments.

Comment Re:The lab-rat audience. (Score 1) 74

Crazy amount of psychopaths downvoting this. I guess mind-altering drugs are real.

Medications became fashionable. When everyone is medicated, no one really sees a problem with medication. And may never again.

I know what you mean. We are at the point where this drug inundation is doing active harm: antipsychotics are now mainstreamed for women, and there is a bit of a side-effect loop, with the weight gain and tardive dyskinesia (uncontrolled jerky movements) being treated with yet another drug.

High blood pressure is treated with meds that make men impotent, and isn't there something a bit sinister about drugs whose efficacy is judged by whether you almost faint when you stand?

Social media became profitable. When everyone is a narcissist, no one really sees a problem with narcissism. And may never again.

(When they used to talk about the next generation bringing a “new normal”, I don’t think any historian ever imagined it would boil down to simply dismissing what abnormal is, while pretending nothing bad will happen as a result.)

I'm pretty certain that at some point there will be either a collapse or radical transformation of social media, or a collapse or extinction of western civilization. The present dynamic is not sustainable.

Comment Re:This is a systemic problem, not an isolated one (Score 1) 41

Your comment about administrators is absolutely right.

I'm in Europe, where the problem is less pronounced. Still, over the last 20 years the ratio of non-teaching staff to teaching staff has gone from 2:3 to 3:2. Those numbers don't look dramatic, but consider: it used to be that 100 teaching staff had 66 admin staff. Now that same 100 teaching staff have 150 admin staff, roughly 2.3 times as many. Not that our teaching loads have been reduced - quite the contrary, our classes are now larger. You have to fund the bloat somehow.

Reminds me of the place I retired from. Once upon a time, there was around a 1:1 ratio of engineers/scientists to support aides. It ended up morphing into a 5:1 ratio.

But next came the bean-counter boom. What was once handled well by a small group became the largest division in the institute, always demanding more accountants. They even embedded accountants in the other groups after sucking up every cent of the overhead money. That way they could suck up more money yet.

Weirdly enough, I ended up doing most of the accounting for my group, and I'm not an accountant. And I couldn't get the professional development I was contractually required to get because there was no overhead any more.

I once joked that they were going to hire a six-figure bean counter to keep track of the $5,000 a year spent on pencils. Turned out to be prescient.


I am reminded of the famous quote: "The bureaucracy is expanding to meet the needs of the expanding bureaucracy."

Ohh, I like that one!

Comment Re:Define "conscious" (Score 1) 374

The problem is that we can't define consciousness. No one can agree on what it means, or whether it means anything at all.

No way. We may not have a full scientific understanding, but neuroscience has made huge advances in explaining how consciousness emerges in the brain and how it is affected by changing conditions in its low-level processes.

We cannot say that machines will never have similar emergent patterns that could become conscious. But we can say for sure that the current ramblings of LLM text generation can't be conscious, because they are created directly by much simpler, deterministic low-level computations.

The long LLM-generated dissertations that people mistake for conscious reflection do not come anywhere near the complex introspective processes that we know are involved in having consciousness; they are just mechanical pattern generation from the highly compressed encoding of human culture on which the models have been trained. It's true that our own brains learn by highly compressing our lived experiences, but we know for sure that our consciousness involves something more than just compiling memories.

Comment Re:Consciousness isn't as mysterious as you thought (Score 1) 374

What he is saying is that it "looks enough like actual consciousness that it must be it", but that is not sound reasoning.

Something can be functionally equivalent enough to the real thing to give the impression of being the real thing without actually being the real thing.

That nails it. Too many people think that AI models are either Pinocchio or Frankenstein, a constructed being who gained a life of its own, becoming friendly or terrifying; when in fact the current batch is nothing more than The Wizard of Oz, faking the appearance of an awesome entity because some human behind the curtain benefits from making you believe that.

Comment Re: What I don't like about Dawkins (Score 4, Insightful) 374

If it can, then it breaks the deterministic behavior of the known and understood physical components.

What makes you believe that? Our current best understanding of consciousness is that it's an after-the-fact rationalisation of the multiple low-level brain processes that converge into a subconscious decision. If that's the case, consciousness doesn't influence the external world in a non-deterministic way.

If LLMs are not conscious, it's because they lack this high-level aggregate feedback loop, not because consciousness needs to be non-deterministic. All their outputs are created from low-level reactions, like the reflexes of an amoeba growing through its environment toward the gradient with more food.

Comment ah, the old consciousness thing... (Score 2) 374

Problem is: We don't even know what consciousness is.

So the best we can say is whether something creates the impression of having one, based on whom we attribute consciousness to, i.e. other humans. Well, big surprise that a model explicitly trained on human language and texts creates that impression. It does show just how good the models are at pretending to be human, because they have a shitload of examples of what humans would say.

For all we know, the gas clouds on Jupiter could be conscious, just in a way that is completely baffling to us. We can't rule it out because we don't know what consciousness is, so we can't test for it.
