
Comment Re:How much is really delayed maintenance? (Score 5, Insightful) 116

I'm no longer a fan of electric cars after I learned that they don't really solve the smog problem

That's stupid. EVs might not be a 100% perfect fix, but they will dramatically improve air quality, including a reduction in ground-level O3 (ozone), a major component of smog. That's a good thing.

EVs are an essential step towards a cleaner future. Walkable cities and high-speed rail are ultimately better, but we're not going to see those in the US in our lifetime.

Comment Re:But ... (Score 1) 74

is seeing and describing is akin to raising a young child.

Try to resist the impulse to anthropomorphize these things. They are not independent entities that learn and grow on their own. Neither are they capable of things like consideration, reason, or analysis. This isn't speculation. These are simple facts, things we know with absolute certainty.

One of the differences that Hinton points out:

Is also nonsense, which he should know. He's lost his mind. Neural networks do not have "experiences" in any meaningful way. That's insane. Try this link. That should dispel any absurd notions you might have picked up.

it doesn't mean we automatically understand everything there is to know about it.

This is just 'god of the gaps'. I don't know what it is you think we don't understand, but I can assure you that it's a lot less than you think. We design and build these things, after all. We don't just do random things and hope it works.

Comment Re:But ... (Score 1) 74

You keep making claims without any evidence.

These are basic facts, not nonsense speculation like you've been posting.

Not once have you done any such thing.

You must be illiterate as well. Why are you here? Just to waste everyone's time with pointless bullshit?

Not an easy decision!

If you weren't a complete moron, you could actually evaluate the claims on their merit.

Sorry, kid, your hero has gone senile. Get over it.

Comment Re:But ... (Score 1) 74

Just because you say it's silly means everyone should agree?

I've explained this in depth countless times over the past few years. At this point, if you still believe ridiculous nonsense like that, you're either incapable of understanding how these things work or your ignorance is willful.

His accolades include the fucking Turing award.

That doesn't make what he's said any less stupid.

Comment Re:But ... (Score 1) 74

I do not think that the training data is stored verbatim in the model....

You are correct. The training data is not stored verbatim in the model. Neither is the model some sort of compression tool. Still, occasionally, you'll see it produce verbatim text. The reason is usually that the text in question appeared in the training data hundreds of times. Remember that these models generate text probabilistically on the basis of the training data, so heavily repeated text becomes much more likely to appear verbatim in the output even though the training text itself isn't stored in the model.
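A toy sketch of that effect (hypothetical, nothing like a real LLM): train a simple bigram model on a corpus where one sentence is repeated many times. The model only stores conditional counts, never the text itself, yet greedy sampling reproduces the repeated sentence verbatim because its bigrams dominate the statistics.

```python
from collections import Counter, defaultdict

# Toy "training data": one sentence repeated many times, plus a little
# other text. No sentence is ever stored as a string in the "model".
corpus = ("the quick brown fox jumps over a lazy dog . " * 200 +
          "the slow red fox naps . " * 3).split()

# "Train" a bigram model: just conditional next-word counts.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Greedy generation: always pick the most probable next word.
word, output = "the", ["the"]
for _ in range(9):
    word = counts[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))
# -> "the quick brown fox jumps over a lazy dog ."
# The repeated sentence dominates the counts, so the most probable
# continuation reproduces it verbatim.
```

The same mechanism, scaled up enormously, is why an LLM can emit a famous quote or a license header word-for-word without containing a copy of it.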

Comment Re:But ... (Score 1) 74

They are trying to solve the hallucination problem.

Which is obviously impossible. Again, there is no actual understanding here, which is what you'd need to identify and correct mistakes. (Of course, the basic structure and function of these models means that kind of evaluation is impossible, so whatever hand-wavy ad-hoc definition of "understanding" you want to use doesn't really matter.) So-called "hallucinations" are exactly the kind of output you should expect, given how these models function.

Take some time to learn about what LLMs are and how they function. It'll put a very quick and decisive end to all this silly nonsense.

The LLMs developers may possibly fix this by

Why does every layperson believe they know better than actual experts? As though their ignorance were some kind of superpower. "Those dumbass experts couldn't possibly have thought of this obvious thing! They need my outside perspective!" It boggles the mind.

Comment Re:But ... (Score 1) 74

LLMs have been demonstrated to be Turing complete.
https://arxiv.org/pdf/2301.045... [arxiv.org]

This is how we know you don't have a clue.

You obviously didn't read or understand the paper. That claim depends on infinite-precision reals, which are proven to be impossible to realize in physical systems.
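The finite-precision point is easy to demonstrate: real hardware gives you IEEE 754 doubles, which carry roughly 15-16 significant decimal digits, so any construction that assumes infinite-precision weights falls apart on a physical machine. A quick illustration:

```python
import sys

# IEEE 754 double precision: machine epsilon is the gap between 1.0
# and the next representable double.
print(sys.float_info.epsilon)   # 2.220446049250313e-16

# Anything sufficiently far below epsilon is simply absorbed:
print(1.0 + 1e-17 == 1.0)       # True

# Even simple decimals aren't exactly representable:
print(0.1 + 0.2 == 0.3)         # False
print(f"{0.1 + 0.2:.20f}")      # 0.30000000000000004441
```

With only finitely many representable values, the unbounded-precision state the Turing-completeness construction relies on simply doesn't exist.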

You're wasting everyone's time with your unimaginable ignorance. Go away.

Comment Re:But ... (Score 2) 74

What gets me going are comments that seem to completely ignore the possibility the LLMs do in fact represent an intelligence of some sort.

That's because that's not a possibility, it's silly nonsense.

Take some time and learn about how LLMs work. This fact will become obvious very quickly.

When the "Godfather" of AI quits a lucrative position to be able to speak freely about the dangers of AI

He was 75 when he left Google. I won't say that senility was a factor, but he's been spouting nonsense ever since.

Comment Re:Question (Score 2) 80

It may have been good enough 20 years ago, but the part would have been on the verge of being considered NRND back then

In 2004? The Z80 was still going strong. The eZ80 was just a few years old, after all. That product line is not being discontinued, only the Z84C00 line [pdf].

So, no, it would absolutely not have been considered NRND back then. Even today, as a platform, the Z80 is as stable as ever.

It's an obsolete part [...] security of supply and programmability

Old does not mean "obsolete". It's been around for ages and a lot of people are familiar with it. Finding software and developers isn't an issue. There's a lot of value to be had from the kind of stability the Z80 offers. Again, I'll point out that Zilog will still be making binary-compatible chips, just not ones you can drop into your ColecoVision. You'll need to make a simple adapter. That's pretty damn amazing.

Comment Re:false positives (Score 1) 115

That seems unlikely given the absurdly high false-positive rate you get out of these things. You wouldn't have any students left by the end of the semester!

Here's how it works everywhere I've been involved: students submit their work through an online portal, which provides them with a plagiarism score a few minutes later. They can then review their work to see what the system flagged. Above a certain threshold, the system (or policy, depending on the institution) won't allow the assignment to be graded, requiring a human review if the assignment isn't resubmitted with a lower score before the deadline. That's just how bad these tools are at detecting actual plagiarism.
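A hypothetical sketch of that kind of gating policy (the names and thresholds below are made up for illustration, not any real portal's API or numbers):

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real institutions set their own policy.
AUTO_PASS = 0.15   # below this, grading proceeds automatically
HARD_BLOCK = 0.40  # at or above this, grading is blocked pending review

@dataclass
class Submission:
    student: str
    plagiarism_score: float  # similarity fraction from the checker

def grading_status(sub: Submission) -> str:
    """Decide whether an assignment can be graded as submitted."""
    if sub.plagiarism_score < AUTO_PASS:
        return "gradeable"
    if sub.plagiarism_score >= HARD_BLOCK:
        # Blocked: resubmit with a lower score before the deadline,
        # or a human reviews the flagged passages.
        return "blocked-pending-review"
    return "flagged-for-review"  # borderline: graded, but flagged

print(grading_status(Submission("alice", 0.05)))  # gradeable
print(grading_status(Submission("bob", 0.55)))    # blocked-pending-review
```

The human-review escape hatch is the important part: with false-positive rates this high, no sane policy lets the score alone decide the outcome.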

That said, it would be very difficult to catch intentional plagiarism without them. And despite how simple gaming the system might appear at first, it's not nearly as easy as you think.
