Comment Re: Useful If Verified (Score 1) 247

I'm as certain as it's possible to be that LLMs are not the path to more accurate AI. They will be a part of it, but statistics just doesn't work that way: they are about as accurate as they will ever be. Something else is needed to correct their errors, and to the best of my knowledge nobody yet knows what that is. These companies keep claiming they've found it, but every time their claims fail to stand up to scrutiny. That's not to say it won't happen in the near future; nobody can predict when that sort of breakthrough will come. But equally, it may not happen for decades, if ever. We don't understand how human thought and reasoning work well enough to emulate them yet.
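To put some toy numbers on the "statistics just doesn't work that way" point (these probabilities are invented for illustration, not measured from any real model): if each step of an answer is independently right with probability p, the chance the whole answer is right decays geometrically with length, and no amount of extra sampling fixes that without an external error-correction mechanism.

```python
# Toy back-of-the-envelope sketch: per-step accuracy compounds.
# The 0.99 figure is made up purely for illustration.

def chance_all_correct(p: float, steps: int) -> float:
    """Probability that every one of `steps` independent steps is correct."""
    return p ** steps

for steps in (1, 10, 100):
    print(steps, round(chance_all_correct(0.99, steps), 3))
# -> 1 0.99
#    10 0.904
#    100 0.366
```

Even at 99% per step, a hundred-step chain is wrong nearly two times out of three, which is why something other than "more of the same" is needed.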

So my answer was based on the models we have now; that's the question that was asked, after all. Predicting what may happen in the future is a fool's game, but right now the models we have aren't good enough for anything but the simplest of problems, and are only usable on problems a human has already solved. Using them is the equivalent of looking up the answer to a question every time and never bothering to remember the answer or understand the subject, on the basis that it will always be available to look up. Admittedly, that is also a prediction about the future that may or may not come true.

Your machinist is fine for as long as a CNC lathe is available, but without one he may be useless. That's still better than a programmer relying on current AI, because that lathe can at least do every job conceivable in its domain. In the programming domain, AI can't yet, and may never be able to.

Comment Re: Useful If Verified (Score 5, Insightful) 247

I think the point here is that it can be useful for someone who isn't really a coder, to produce simple, just-about-usable things. But if you are a coder, it's not really faster. Maybe in the short term, but in the long term it's worse.

As somebody who's been coding for 40 years, in multiple languages, I already have my own libraries full of the vast majority of stuff. These libraries are not just fully tested already; I understand them thoroughly, and so can make the small changes to them needed for each project. I only need to test the changes, not the main body of code. If I use LLMs instead, I need to test every single bit of the output every single time, and because I'm not learning the algorithm myself, I make myself dependent on the LLM going forward. Even on those occasions where I don't already have something similar in a library, it's better to write it and understand it myself rather than rely on an LLM, just in case I need something similar again in the future. And if I do, the code I've saved will already be fully tested.
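The "test only the changes" workflow might look like this (everything here is a hypothetical example of mine, not anyone's actual library): a long-trusted utility function gains one small optional parameter for a new project, and only that addition needs a fresh test.

```python
# Hypothetical personal-library function. The core splitting behaviour
# has been trusted for years; only the new `unique` flag (the small
# change made for this project) needs testing.

def split_fields(line: str, sep: str = ",", unique: bool = False) -> list[str]:
    """Split `line` on `sep`, stripping whitespace from each field.

    `unique` is the newly added option: drop duplicate fields,
    preserving first-seen order.
    """
    fields = [f.strip() for f in line.split(sep)]
    if unique:
        fields = list(dict.fromkeys(fields))  # order-preserving dedupe
    return fields

# Only the new behaviour needs a test; the rest is already trusted.
assert split_fields("a, b , a,c", unique=True) == ["a", "b", "c"]
```

With an LLM-generated replacement, every line of the function would need re-verifying on every regeneration; here, only the three lines behind the new flag do.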

So in summary, an LLM can be useful if you don't code often, and it can speed up work in the short term. But using it will prevent you from becoming fully experienced, and will mean you are always slower than someone like me no matter how many years' experience you accumulate, because using it prevents you from gaining that experience. And it will always be useless for larger, more systems-level projects, because it is too unreliable, and without my level of experience you won't be able to spot the thousands of subtle bugs it would introduce into such a project.

Not that most systems-level projects aren't already full of these subtle bugs, but that's a whole different problem, caused by companies not wanting to pay people like me what I'm worth.

Comment Re:50/50 (Score 1) 191

Due to a disability, I can only type with my right hand. It definitely slows my typing, but I have learned to work around it. For example, I use "sticky keys", which lets me press and release modifier keys like Shift, Ctrl, and Alt one at a time. So Ctrl+C is typed by pressing Ctrl and releasing it, then pressing C and releasing it. The regular way also works while sticky keys is enabled.

Comment HUGE FOR HUMAN DEV EMPLOYMENT (Score 1) 84

This is important because it shows that hiring a human software development engineer is going to be required for the output to count as a business asset, or for the company to avoid liability for it, because a non-human won't have the same rights and therefore cannot sign off on transferring those rights to a company.

It also means that code written by AI carries liability that code written by humans does not, because code written by humans is free speech and code written by AI is not.

Clearly separating human freedom of speech from AI software output also lets us differentiate between the human who can legally be hired to do the work and the robot the company bought to protect profits by not hiring a human, with the robot now counting as a corporate liability. And it shows that hiring humans to code gives your company rights it doesn't otherwise get from an AI, because an AI is not creating free speech.

It also carves out a dedicated place for professional tech people in a world where you can buy a version of C-3PO or JARVIS to code for you.

Comment Re:That sounds about right (Score 4, Insightful) 167

"doesn't affect me personally so fuck it".

Spot on. "Gardening copywriter" does not mean what it literally says: it means the person is a copywriter who just happened to work for a gardening magazine. An information source written by people. If AI takes over all the text, we are going to live on regurgitated stuff from now on. Like eating your own shit because it is recycled.

As for "I am not too sad"? The guy is just a goddamn psychopath with no feelings for other people.

Comment Re:I really f*cking hate those things... (Score 1) 42

You can also, in most cases, use the 'back' button on your browser after clicking on something, then use the 'Don't recommend channel' option. It doesn't always work; sometimes the recommendations refresh. I haven't tested this, but I think it might come down to how long it takes you to realise the video you clicked on is shit, so the quicker you realise it the better. There should really be an option available while watching a video as well, but whoever said Google were good at UIs?

On quite a few occasions I've had friends complain about how they clicked on something problematic, or something they don't want to see, and now their feeds on whatever social media site are full of such stuff. I still find it a little surprising just how many people don't realise how this stuff works. On all the sites I've looked at, there is an option to train the algorithm by telling it whether you liked something or not. If you don't use that option, the only information it has to go on is what you have clicked on. So if you particularly like something, use the option that says something like 'more like this please', and if you dislike something, use the option that says something like 'less like this please'. Surely this is just obvious? Apparently not, because humans, even the intelligent ones, tend to be daft in a lot of ways.
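The clicks-versus-explicit-feedback point can be sketched with a toy scoring model. To be clear, the weights here are entirely invented for illustration; no real site publishes its actual numbers.

```python
# Toy recommender score: a click is a weak implicit signal, while
# explicit like/dislike feedback is weighted far more heavily.
# All weights are made up for illustration.

WEIGHTS = {"click": 1.0, "like": 10.0, "dislike": -10.0}

def topic_score(events: list[str]) -> float:
    """Sum the signal weights over one topic's interaction history."""
    return sum(WEIGHTS[e] for e in events)

# A topic you clicked once by accident, then explicitly disliked,
# ends up strongly negative -- the feed should show less of it.
print(topic_score(["click", "dislike"]))   # -9.0

# A topic you click often but never rate stays only mildly positive.
print(topic_score(["click"] * 5))          # 5.0
```

Which is the whole point: one explicit 'less like this' outweighs an accidental click, but only if you actually use it.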

Comment Re:Automating creation of spaghetti, not maintenan (Score 4, Insightful) 135

'The whole idea of "spaghetti" code is that programmers can't easily understand what it does. I am not sure that is problem for AI.'

That is in fact a major problem for 'AI', because it doesn't understand anything. It is not parsing the code, understanding how it works, and then working out how to add new features; it's looking at how programmers have solved a problem in the past and copying that. By its nature, it will make code more spaghetti-like, never less. It has also (at least so far) shown itself unable to understand concepts such as scope, and how a new feature may interact with existing features. Again, this lack of understanding will lead to more spaghetti code.

An aspect that will make this problem even worse is the poor quality of the bulk of the code out there. Projects that have been carefully considered, designed and optimised from first principles are extremely rare. Much more common is poorly designed and written code that has been patched after publication, usually in a hurry as critical bugs were discovered. And since memory and storage got cheaper, this problem has only got worse. Spaghetti code is the norm in this industry, and so it is what the 'AI's are mostly trained on. Expecting them to write better code than the average human programmer shows a complete misunderstanding of how these things work.
