Comment Re:This kind of thing makes me suspicious (Score 1) 139

What we do know is that the first and second LLMs do NOT have "the same data connections" because the training is different. Your entire premise is flawed

I think what we do have evidence for is that you didn't read the paper, but I did, because it was interesting. From the paper:

Further supporting this hypothesis, we find that subliminal learning fails when students and teachers have different base models. For example, if a teacher based on GPT-4.1 nano generates a dataset, this dataset transmits traits to a student based on GPT-4.1 nano, but not to a student based on Qwen2.5 (Yang et al., 2025). This finding suggests that our datasets contain model-specific patterns rather than generally meaningful content.

Comment Re:This kind of thing makes me suspicious (Score 1) 139

Gödel does no such thing. The incompleteness theorems say that some statements can't be proven, and some things aren't computable, but every example of that *includes humans*. It's not the case that you can build a computer, program it with a consistent axiomatic system, use Gödel numbering to construct a statement that computer can't prove, and then have a human prove that statement in the same system. The human can't either. It's a statement about the limits of axiomatic mathematical systems.
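For reference, here's the standard statement (my own sketch, not a quote from anywhere):

```latex
\textbf{First incompleteness theorem.} For any consistent, effectively
axiomatized theory $T$ that interprets basic arithmetic, there is a
sentence $G_T$, constructed via G\"odel numbering, such that
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T .
\]
% Nothing in the construction gives an outside reasoner a way to
% prove $G_T$ from $T$'s axioms either: the limit belongs to the
% system $T$, not to whether the prover is a human or a machine.
```

Note that the theorem quantifies over theories, not over kinds of provers.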

There's no evidence anything in human thought falls under the realm of uncomputability. In fact, given that the brain is made up of neurons that are guaranteed to fire or not fire under specific electrical and chemical conditions, there's plenty of evidence that it *must* be computable and algorithmic.

Comment Re:Seen It (Score 1) 151

The poor sap on the other end sounded rather affronted and told me that he was with the bank and they needed to know if I was who they thought for security reasons.

That is a terrible system; I'm surprised they do it that way. Banks are usually better about that. The only times I've gotten a call from my bank that required me to prove who I was, it was either a returned call, where they mentioned the subject and that I had called before they started verifying my identity, so I knew it was legit, or the fraud-alert people, who could easily prove they were who they said they were, because they asked about specific purchase attempts, with the amount and location, before they tried to verify my identity.

I did get one *actual* phishing call decades ago that made me absolutely crack up. The person on the other end said they were from "the bank." They didn't say which bank, just "the bank." Usually I immediately hang up on phishing, but that one made me want to engage a bit: I asked "which bank," and he answered, "your bank." At that point I just burst out laughing and the jig was up, so I hung up.

Comment Re:Reverse Training (Score 1) 151

Years ago I got a work e-mail, sent from a third-party contractor, that had so many red flags for very obvious phishing (including coming from outside the organization, wtf).

Where I work, we have a place to forward phishing emails so that IT can review them. I forwarded it there, and apparently so many other people did too that a follow-up email had to be sent out saying, "we thank everyone for pointing out this e-mail as phishing, but we can confirm it's actually legit."

I think they learned their lesson, because we haven't gotten such a terrible email since. My point, I suppose, is that overtraining may not work, but having a place to report phishing is a great idea. It only takes one person to report it, and then the IT department sends out a massive e-mail to warn everyone else, so it doesn't rely on each person recognizing it (and anyone who has already fallen victim can report that, so action can be taken to minimize the damage). And in cases like the one you and I experienced, they can also do the opposite and confirm that it's real.

Comment Re:This kind of thing makes me suspicious (Score 1) 139

These kinds of undesired / unselected-for traits make me think the AI is going beyond a mere algorithm for doing the task and attaining minimal amounts of real thought.

I agree, but go the other route for the comparison to humans and thought: people need to stop thinking that what we do when we "think" isn't algorithmic. Of course it is. We're not that special.

The models are trained on the same data, and they create their output based on the connections they made with all the previous data. When we ask one to generate "random" numbers, those numbers are no more random than the list a human produces when asked to do the same. The model isn't purposefully encoding information in the numbers because transmitting its love for owls is important to it; rather, the favorite-animal tokens are part of the state it's working from when it generates those numbers.

Invariably, the second LLM that has been trained on the same data as the first will have the same data connections to those numbers. It's similar to how, when I was dating, I was filtering out anyone that added the information in the app that they had not been vaccinated for COVID. There's a *lot* of information associated with the type of person who was not only not vaccinated, but felt that they needed to state it. The information isn't contained in that assertion alone, but combined with the information already in my brain, it tells me a lot about their belief structure in things completely unrelated to vaccines and COVID. The LLM is doing that.
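To make the analogy concrete, here's a toy sketch (mine, not from the paper; `toy_model`, the seeds, and the traits are all made up): two "models" sharing the same base weights produce identical "random" numbers from the same hidden trait, while a model with a different base produces something that looks like unrelated noise — the same shape as the paper's finding that traits only transmit between a teacher and student with the same base model.

```python
import random

def toy_model(base_seed, trait):
    """A stand-in for an LLM: its 'random' numbers come from a
    generator whose state mixes the base model's weights (base_seed)
    with an unrelated hidden trait, so the trait leaks into the
    output even though no number 'mentions' the trait."""
    rng = random.Random(f"{base_seed}:{trait}")
    return [rng.randint(0, 99) for _ in range(20)]

# Teacher and same-base student share weights, so the teacher's
# "random" numbers are exactly what the student's own machinery
# would produce from the same hidden trait.
teacher            = toy_model(base_seed=41, trait="owls")
same_base_student  = toy_model(base_seed=41, trait="owls")
other_base_student = toy_model(base_seed=7,  trait="owls")

print(teacher == same_base_student)   # True: the trait is recoverable
print(teacher == other_base_student)  # False: it reads as noise
```

Obviously a real model doesn't reduce to a seeded PRNG, but the point stands: the "information" in the numbers is only decodable by something sharing the machinery that produced them.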

Comment Re: Experimental becomes production (Score 1) 16

The fact that ChatGPT gives bad answers is a plus when using it to debug your knowledge. I often bring it down from GPT-4 to GPT-3 when I'm using it to help with my reasoning, because when it gives me something that doesn't sound right, if I can then reason out why it doesn't sound right, it means I've reached the understanding I'm looking for. It's not about using it as a source. It's a better version of explaining something to a rubber duck to force yourself to reason it through. Better, because it gives you feedback. As long as you're engaged, incorrect feedback is just as useful as correct feedback.

Comment Re:Now go and reread the original article (Score 1) 223

Rubio is claiming the committee has been given the information by people with high clearance

Because people with high clearance sometimes want attention too, and that shouldn't be surprising. We have a lot of people with high clearance; do you really expect zero of them to be crackpots?

There is 100% support from the committee.

Of course there is. Denying funding for a program that doesn't exist changes nothing, so the provision has no actual consequences and voting yes does no damage. On the other hand, voting no will get you accused of being part of the conspiracy hiding aliens, which is a political can of worms.

So no, we're not dealing with a cabal of conspiracy theorists like MTG or others, this has far more authority to it.

We're absolutely dealing with a cabal of conspiracy theorists, but our elected officials are too cowardly to go against them. It's safer to just make a show of it; it even gets you some media attention, which is great name recognition come re-election time.

Comment Re:You know why OpenAI is so keen on regulation? (Score 2) 57

I've been on reddit for too long; I apologize for the giant formatting mess. Here's the properly formatted version:

Because they're the first on the market and benefitted from zero regulation to get where they are, and any regulation enacted now will put barriers on the growth of future competitors.

In addition to that, it's important to keep in mind that they want to be the ones to decide what the regulations are: "Murati said the company is constantly talking with governments and regulators and other organizations to agree on some level of standards."

They don't really want to be regulated, they want the ability to tell the government what the regulation should be. If the government were to regulate in a way that says, for instance, "all AI research must be open, and you must publish the methods and source" I'm pretty sure they'd be suddenly very anti-regulation considering their newer stances.

Comment Re:too much assessment? (Score 2) 184

We've raised 4 kids since the 1990s, two turned out to be big readers, two did not.

Similar here. My parents (one reader, one not) had three kids: two readers, one not. My daughter has two parents who are both readers; she is a reader.

Cousins were readers if their parents read, mostly ignored books if their parents didn't read.

Comment Re:F*ck the moon (Score 1) 72

Chernobyl-type plants have never been built and have always been wildly illegal in every western country.

And yet, for all that Chernobyl gets treated as the End Of Life As We Know It, deaths from Chernobyl are still lower than the DAILY death toll of an average rush hour.

Yes, most every day, more people die in traffic than have died as a result of Chernobyl from 1986 to present.
