Comment Re: 100x attack surface, too (Score 1) 82

Are you claiming no LLMs ever get trained on their users' queries, and regurgitate any of them to other users?

The use of your data is strictly controlled by the EULA that you agree to.
Paid LLM services nearly always exclude your conversations from model training, or offer an opt-out.
Free services are another story, but this story isn't really about the legions of jackasses who have replaced Google with ChatGPT. And at least in ChatGPT's case, you can opt out even at the free tier.

I'm not aware of any free-tier LLM you can use as a coding assistant- but for whatever exceptions there may be to that rule, those people should definitely check what the EULA says about their conversations and training.

Your blanket implication that "using an LLM will involve your data being leaked" is wrong. If you're insulted by being wrong, I think that's probably a topic of discussion for you and your therapist.

Comment Re:I don't believe the hype, but (Score 1) 82

I'm a Chief Engineer. I'm feeling pretty safe for the time being.
Parent is right though- juniors aren't. They're proper fucked. Which kind of makes one wonder what the long game here is, since I don't see any evidence that LLMs are going to come for my job any time soon, and someone is going to have to fucking replace me sooner or later.

Comment Re:fuck this guy (Score 1) 82

Keep telling yourself that. After 30+ years, between DEI, disability, and AI, I haven't been able to land a job in a year. Job boards are fucking empty.

That's because people like me are doing the hiring, and we find people like you to be overpaid, unqualified, and repugnant.

You can blame DEI, disability, and AI if it helps you sleep at night, but it's none of those things.

Comment Re:fuck this guy (Score 1) 82

if you said it was a "3Mbps Internet connection" at least one side had to be 3Mbps, for example.

This is, and always has been the case.
The problem comes in the asterisks.

1) There's no accounting for throughput between any two random points on the internet, with their random assortment of intermediate hops.
2) There's no accounting for the overhead above layer 3. The download speeds you're measuring are application-layer payload; my router doesn't forward application payloads, it forwards layer-3 datagrams.

The "small print" needed to explain to the average person what they can expect to see from their n-bps connection, without some asshole getting confused and claiming it's all a lie, is longer than that asshole would be able to digest- so the problem is unsolvable. Are you one of those such assholes?
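To put rough numbers on point 2, here's a sketch of how much of an advertised line rate actually survives as application-layer payload. It assumes full-size Ethernet frames carrying TCP/IPv4 with no options- a simplification, since real paths add VLAN tags, tunnels, retransmits, and smaller MSS values:

```python
# Rough goodput estimate for a nominal 3 Mbps link, counting only
# fixed per-frame protocol overhead. Illustrative, not definitive.

LINE_RATE_BPS = 3_000_000   # advertised layer-1/2 rate
MTU = 1500                  # Ethernet payload bytes per frame
IP_HEADER = 20              # IPv4, no options
TCP_HEADER = 20             # TCP, no options
ETH_OVERHEAD = 38           # preamble 8 + header 14 + FCS 4 + interframe gap 12

payload = MTU - IP_HEADER - TCP_HEADER   # application bytes per frame
wire_bytes = MTU + ETH_OVERHEAD          # bytes actually on the wire
goodput = LINE_RATE_BPS * payload / wire_bytes

print(f"goodput ~ {goodput / 1e6:.2f} Mbps "
      f"({100 * payload / wire_bytes:.1f}% of line rate)")
```

So even before congestion or distant hops, a "3 Mbps" link tops out around 2.85 Mbps of actual downloaded bytes.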

Comment Re:No (Score 1) 82

I agree that not everyone who writes code is a software engineer, however...

I got my BSCS at a College of Engineering at a State University.
I am a scientist, and an engineer.
Part of that education was, indeed, software engineering- i.e., the engineering of software systems.

Talking about Silicon Valley's "10x Engineers"- these are guys with a BSCS, and more.
They are Engineers. We're paid stupid fucking amounts of money for being so.

In my own journey to stay relevant as I progressively march toward aging out of the system, I too am looking to augment my abilities with LLMs.
I wouldn't say they've turned me from a 10x Engineer to a 100x Engineer. Probably more like a 12x Engineer. But my ability to wield them is getting better.

I am not alone among my peers- who do not include people like you.
Your opinion, and your moderation, are just someone's insecurity and sour grapes.

Comment Re:"Central" is probably overstating it. (Score 1) 6

I think this would almost certainly be their end-goal. I suspect AMD would like to do the same thing.
99.9% of all difficulty working with the big crunchers is OS+driver bullshit.

If I were them, I'd be looking to make a product where you're really just talking to a combined system via a mailbox with a defined API, rather than trying to deal with the nightmare of virtual memory management across 8 GPUs.
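A toy sketch of that mailbox idea, in case it isn't clear- every name here (Mailbox, Command, submit, drain) is invented for illustration, not any vendor's actual API. The point is that the host submits opaque commands to a single queue and never touches per-GPU memory itself:

```python
# Hypothetical "mailbox with a defined API": one host-facing command
# queue; device placement and memory management are the backend's problem.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Command:
    op: str        # e.g. "matmul", "allreduce"
    args: dict     # opaque to the host; interpreted by the backend

class Mailbox:
    """Single entry point for the whole multi-GPU system."""
    def __init__(self):
        self._queue = Queue()
        self.log = []   # record of what the backend "executed"

    def submit(self, cmd: Command):
        self._queue.put(cmd)

    def drain(self):
        # Stand-in for firmware/driver work: just record what would run.
        while not self._queue.empty():
            self.log.append(self._queue.get().op)

mb = Mailbox()
mb.submit(Command("matmul", {"m": 4096, "n": 4096, "k": 4096}))
mb.submit(Command("allreduce", {"tensor": "grad0"}))
mb.drain()
print(mb.log)
```

The design win is that the host-side contract is two calls, instead of a pile of per-device allocation, mapping, and synchronization APIs.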

Comment Re:"Central" is probably overstating it. (Score 1) 6

Ehhh, the GPUs don't particularly "chat with one another".
You're right that they have direct communications- but there's no real way to program them to utilize them.
For example, say you've got a network layer spread across 5 GPUs- the FMAs can be done on all of them, but the partial products need to be combined. You can't program your kernel to do this- it has no way to communicate with the other cards. So the commands telling each GPU to send its data over the appropriate pipes to a combining kernel, and then redistribute the result, must come from the CPU. The CPUs don't just do housekeeping- they're the conductor of the orchestra.
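Here's the shape of that split-and-combine pattern as a minimal sketch, with plain Python lists standing in for per-device buffers (real code would be CUDA/HIP kernels plus host-issued copies; none of these names come from any actual GPU API):

```python
# CPU-as-conductor sketch: a layer's weight matrix is split across N
# "devices" along the input dimension. Each device's "kernel" can only do
# FMAs over the rows it holds; the HOST performs the combine step, because
# kernels cannot reach across cards.
import random

N_DEVICES = 5
IN_DIM, OUT_DIM = 20, 8
random.seed(0)

x = [random.random() for _ in range(IN_DIM)]
W = [[random.random() for _ in range(OUT_DIM)] for _ in range(IN_DIM)]

# Shard the input dimension: one slice of rows per device.
shards = [range(d, IN_DIM, N_DEVICES) for d in range(N_DEVICES)]

def device_kernel(rows):
    # What one GPU can do on its own: accumulate x[i] * W[i][j] locally.
    out = [0.0] * OUT_DIM
    for i in rows:
        for j in range(OUT_DIM):
            out[j] += x[i] * W[i][j]
    return out

# Each device produces a partial product...
partials = [device_kernel(rows) for rows in shards]

# ...and the host drives the combine: gather the partials and sum them.
y = [sum(p[j] for p in partials) for j in range(OUT_DIM)]

print([round(v, 4) for v in y])
```

The per-device step is embarrassingly parallel; everything after it- moving partials to one place, summing, redistributing- is exactly the orchestration the CPU has to conduct.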

Comment Re:AI Training (Score 1) 63

So, there are two issues here. First, one has to actually agree that the thought experiment shows that; not all philosophers do. Second, and more importantly, this is precisely why I made the point that whether or not these systems are reasoning is a separate claim from whether or not they are solving problems that humans would apply reasoning to. The Chinese room thought experiment isn't relevant to the question at hand, which is how effective these systems are, not what their internals are doing.

Comment Re: Lets all welcome the USA (Score 1) 107

Dobbs also overturned decades of precedent, and overruled Congress.

Stop saying shit like this.
Most of your post is great- but that part is fucking stupid.

The same exact argument could be made for the overturning of many really fucking bad rulings.
It is the job of the Supreme Court to overturn decades of bad precedent and overrule a Congress that has exceeded its mandate.

Comment Re:News? (Score 1) 77

So, I started to do the math...
lbf for lbf, a low-bypass turbofan and a high-bypass turbofan aren't far apart.
You'll need ~2000 lb/s of airflow through JT3Ds to match the thrust a GEnx produces at ~2500 lb/s.
However- the rub is that you'll need 4 JT3Ds to match the thrust of a single GEnx (we're doing dry thrust here, as I don't think afterburners are a fair comparison ;)
The problem then is that those 4 JT3Ds weigh 5000 lb more than the GEnx.
So after compensating for the murdered thrust/weight ratio, we're looking at needing 5.5 JT3Ds, giving us a final take-off consumption of ~2700 lb/s.

So ultimately, my from-the-hip supposition was correct.
The caveats are important, of course- the only reason what I said is true is that a low-bypass turbofan is so bloody inefficient at take-off (compared to a high-bypass).

If you were to swap out the GEnx engines on a 787 with enough low-bypass turbofans to match available thrust, the low-bypass motors would be sucking 8% more air.
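The arithmetic above can be re-run directly. The airflow figure per JT3D below (~490 lb/s) is a rough value consistent with the comment's totals, not an official spec- treat this as checking the arithmetic, not as engine data:

```python
# Back-of-envelope check of the JT3D vs GEnx airflow comparison.
GENX_AIRFLOW = 2500       # lb/s, one GEnx at take-off (figure from the comment)
JT3D_AIRFLOW = 490        # lb/s, one JT3D at take-off (rough assumed figure)
ENGINES_NEEDED = 5.5      # JT3Ds after compensating thrust/weight ratio

total_airflow = ENGINES_NEEDED * JT3D_AIRFLOW
extra_pct = 100 * (total_airflow / GENX_AIRFLOW - 1)

print(f"total airflow ~ {total_airflow:.0f} lb/s "
      f"({extra_pct:.0f}% more than the GEnx)")
```

Which lands at roughly 2700 lb/s, i.e. about 8% more air than the single GEnx- matching the figure in the comment.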
