
Comment 5x, really? (Score 1) 40

Productivity up 5x, really? So what used to take 5 hours without AI now takes 1 hour? Consistently, all workday, every week, every month, the entire year?

I use several AI services both at work and as a hobby, and while they are useful, they tend to fail when things get a bit complex. Since the technology behind them (Transformers) doesn't do anything other than calculate a list of probable next tokens, there is a natural limit to what it can handle.

I see your type of claim from time to time; sometimes it's 5 times the productivity, other times it's "saves me 6 hours a day" or something outlandish like that. I don't know what you did before you started using AI, but if you did anything other than scaffolding simple web apps, then I don't believe you. I've managed to talk to a couple of people who claimed this kind of performance boost, and when I asked for enough details it turned out that it wasn't exactly true.

You do make a good sales pitch for JetBrains and their products; are you in any way connected to that company? Or just an extremely happy customer who manages to get AI to work better than the entire planet, even the "AI fluent operators" (lol) or "expert prompt engineers" crowd?

I use JetBrains products every day, btw.

Comment No, just no (Score 4, Interesting) 49

FOMO, or fear of missing out, seems to be the main sales strategy for AI these days.

An LLM that is the equivalent of a really smart developer?
No. Just no. I use AI both at work and at home, and I pay for the better stuff at home. It's a really useful tool. I'm also a former developer. If you honestly think that an LLM is now the same as a smart developer, you have no clue about what development is or the skill set necessary to do it.
I also know the technology behind LLMs, and therefore the limitations within the technology itself. Transformers can never replace a job that requires even a minimum level of intelligence; they can't even manage common sense.

It's always "The AI is awesome, it's YOU that is the problem" in these sales pitches.
"AI fluent operators"... lol :D

Comment Re:You can lead a bot to solder.. (Score 1) 61

I'm going to do something I usually don't. I'm going to reply, even though I gave up on this thread. I realize that you have a point when it comes to your questions and me not providing a good answer. This reply is probably not going to sway you, but this is more for others who might stumble upon this thread some time in the future.

Ok. So memory and state are connected. When I said that the model has no memory of previous runs, you countered by mentioning KV caching, which led me to believe that you either misunderstood me on purpose, or just used google/chatgpt for a reply. In case it was a genuine misunderstanding, I'll try to explain. Memory, when discussing intelligence (artificial or otherwise), means that previous processing or information impacts the outcome of the current processing. KV caching exists only to improve performance; it doesn't alter the outcome. If you run the transformer with the same input it will end up with the same output, regardless of whether you are using a KV cache. You just save yourself some compute and time.
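To make the cache point concrete, here is a minimal toy sketch in pure Python. The hash-based "model" is a hypothetical stand-in for a transformer forward pass (a real KV cache stores per-layer keys and values rather than memoizing whole outputs, but the property illustrated is the same): the cache only avoids recomputation, it cannot change the result.

```python
import hashlib

def next_token_logits(tokens):
    # Toy stand-in for a transformer forward pass: a pure function
    # of the input tokens, so it is stateless and deterministic.
    h = hashlib.sha256(" ".join(tokens).encode()).digest()
    return [b / 255 for b in h[:4]]  # fake "logits" over a 4-token vocab

cache = {}

def next_token_logits_cached(tokens):
    # Memoization standing in (loosely) for a KV cache: it only
    # saves compute on repeated prefixes, it cannot alter the output.
    key = tuple(tokens)
    if key not in cache:
        cache[key] = next_token_logits(tokens)
    return cache[key]

prompt = ["the", "cat", "sat"]
# Cached and uncached paths agree for any input, by construction.
assert next_token_logits(prompt) == next_token_logits_cached(prompt)
```

Same input, same output, cache or no cache; the cache changes the cost, not the answer.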

Different types of ML models can be trained to extrapolate, and you could of course argue that an LLM "extrapolates the next token in a series of tokens". But if we are talking about actual intelligent extrapolation, an LLM can't do that outside of its training data. It is blind luck if it manages to.

I also realize that you did not, in fact, state that LLMs can think and reason, apologies for that.

The important fact that most people do not understand is that the Transformer, which is the core of everything going on now in this AI hype, only processes input tokens to generate a list of probable next tokens. It doesn't actually select the next token, so it has no clue whether the sentence will now be about vehicles, amoebas or shades of green. In fact, when the next token has been selected, based on the temperature setting, and the Transformer processes the previous tokens plus that newly selected token from the last run, it has no clue whether any of those tokens were generated by itself or are completely new input. That is what I meant by not having state (or memory).
This has a huge impact on what an LLM can and can't do. Don't get me wrong, it's amazing how well it can perform with enough training data and clever implementations on the host side, but the fact remains: it can only produce a list of probable next tokens, one token at a time.
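The "selection happens outside the model" step can be sketched as temperature sampling. This is a minimal illustration with made-up logits, not any particular vendor's implementation: softmax over logits scaled by temperature, then sample; temperature 0 degenerates to greedy argmax.

```python
import math
import random

def sample_next_token(logits, temperature, rng):
    # Turn raw scores into probabilities (softmax with temperature)
    # and sample one token index. temperature == 0 means greedy argmax.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):           # inverse-CDF sampling
        acc += e / total
        if r < acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.1]   # hypothetical scores for 3 candidate tokens
print(sample_next_token(logits, 0, random.Random(0)))    # → 0 (greedy argmax)
```

The model produced the three numbers; which token actually gets appended is decided by this host-side loop, which is exactly why the model itself never "knows" what was picked.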

Anyway, I realize I can't provide enough details in a post like this, but I wanted to give a bit more context to my point. You will probably respond like you have previously, but now I have given it a go, and will let this one go.

Comment Re:You can lead a bot to solder.. (Score 1) 61

You said it has no memory and it does. Now you seem to be making a different argument.

It's not a different argument. I guess you are not really motivated to try to understand the difference between a KV cache that reduces compute and what "stateless" means in this discussion, so I think I'll end the discussion here. You are of course welcome to think that Transformers can actually think and reason; enough people do, and that fuels the hype for the moment.
I'm hoping everyone can get a more realistic understanding of what the technology actually can do and what it can't, so we can start to use it in scenarios where it is useful. But that will take time.

Comment Re:You can lead a bot to solder.. (Score 1) 61

Of course it does, that memory takes the form of the KV cache.

The Transformer itself has no state, so no. It has no clue whether those tokens were generated earlier by itself or are an entirely new input. It doesn't matter where those generated tokens are stored; the actual processing by the transformer has no memory other than the input. Given the same input it will always generate the same output. It is stateless and deterministic.

Comment Re:You can lead a bot to solder.. (Score 1) 61

They very much can extrapolate, this technology would be rather pointless if they couldn't.

That depends on what "AI" you are talking about. LLMs certainly can't extrapolate. The technology is called a Transformer, and it assigns a probability to each entry in a list of candidate next tokens (words or parts of words). The Transformer (model) is stateless and deterministic. It only generates the probability list for a single next token each time it is run, and it has no memory of previous runs. It has no clue whether the most probable token will be selected or the least probable one (unlikely, but still possible); that is controlled by the temperature setting. So no: it can select probable next tokens, but that is not the same as extrapolating anything. It doesn't reason or think like that at all.
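The run-at-a-time loop described above can be sketched like this. The hash-based `forward` is a hypothetical stand-in for one transformer run (not a real model): a pure function of its input that returns one distribution, with the selection and the feeding-back happening entirely outside it.

```python
import hashlib

VOCAB = ["cat", "dog", "sat", "ran"]

def forward(tokens):
    # Stand-in for one transformer run: a pure function of the input,
    # returning one probability per candidate next token. No hidden state.
    h = hashlib.sha256(" ".join(tokens).encode()).digest()
    weights = [1 + b for b in h[:len(VOCAB)]]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt, steps):
    tokens = list(prompt)
    for _ in range(steps):
        probs = forward(tokens)                          # one run, one distribution
        best = max(range(len(VOCAB)), key=probs.__getitem__)
        tokens.append(VOCAB[best])                       # selection happens outside the model
    return tokens

# Stateless and deterministic: the same prompt always yields the same tokens,
# and forward() cannot tell its own earlier output from fresh user input.
assert generate(["the"], 3) == generate(["the"], 3)
```

Each call to `forward` sees only a flat token list; nothing marks which tokens the loop generated and which came from the user, which is the "no memory of previous runs" point.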

Other ML models can, if they are trained for that purpose.

Comment Re:Cannot trust (Score 3, Informative) 37

Compute on encrypted data is more of a theoretical exercise (I know, I know, I mean "not particularly useful") than a practical one. The limitations are so many that it can hardly be called processing. You can't make decisions based on encrypted data, because then you could figure out what the data is (think 20 Questions). You can only do some limited math in specific scenarios.
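One classic example of that "limited math" is the multiplicative homomorphism of textbook RSA, sketched below with tiny primes purely for illustration (deliberately insecure, no padding): you can multiply two ciphertexts without decrypting them, but you still can't branch on the values.

```python
# Toy textbook RSA with small primes, for illustration only (not secure).
# Property shown: E(a) * E(b) mod n decrypts to a * b, i.e. limited
# arithmetic on ciphertexts without ever seeing the plaintexts.
p, q = 61, 53
n = p * q                          # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse of e (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 6, 7
c = (enc(a) * enc(b)) % n          # multiply the ciphertexts only
print(dec(c))                      # → 42, computed without decrypting a or b
```

Notice what's missing: there is no way to ask "is the encrypted value bigger than 40?" without leaking information, which is the 20 Questions problem above and why this stays far short of general processing.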

It's interesting, but it can't really process your health data or much of any real world data imho.

Comment Quality is plummeting (Score 4, Interesting) 35

Before they started using AI the quality was barely good enough, and now, after AI, it has gotten way worse. My entire family is using Duolingo, but we've started to talk about switching to something else soon. It's also not a very effective learning platform, as you are usually stuck just learning words (and often rather odd ones at that).

Using AI might save them some money, but they need to do better than what any AI chatbot can do for you. Gamification will only get you so far; the platform needs to actually help you learn a language.

Comment Re:This is such BS (Score 1) 22

I get the point. My perspective is that you can't really ask Claude to "Hack the Mexican government" either. You first need to gather intel, which Claude can explain to you, but so can a quick google search. Then you need to look for vulnerabilities within the information you gathered, and there are countless tools that can help you with that, probably better than Claude, etc.

The stupid thing in this article was the prompt which, for the layman, indicates that Claude can somehow impersonate an "Elite Hacker" and do all this. And that is not true. It can help less capable script kiddies, but so can google and other tools.

Comment This is such BS (Score 1, Insightful) 22

This is just stupid.

If Claude can "act as an elite hacker" and "find vulnerabilities", then every tool on the planet would find the same vulnerabilities. The chatbot is not, in fact, an elite hacker; it's a word (token) generator, and it has no f..ing clue about how to find vulnerabilities. The steps it can generate (token for token) are the same ones you can find in any Hacking for Dummies or 1337 Hackzor script.
These headlines grow dumber and dumber as the AI companies desperately try to get everyone to use their products.

What is next? "I asked the chatbot to act like a billionaire stock broker and it made me billions"?
ffs

Comment Autocomplete failed at generating secure password (Score 2) 84

It's frustrating to see these headlines stating the obvious and completely disregarding how the technology actually works. Even here on Slashdot there was a user who insisted "tHaT Is NoT hOW rEAsoNinG mODelS woRK!", as if there were some magic AI tech not known to the world. Transformers work in a specific way: they calculate the probable next or missing token. That is it. There is nothing else.

OF COURSE it can't generate random passwords
