
Comment What a lie. (Score 1) 54

"Having too much work and not enough time to do it" is not some happenstance artifact of our current state of technology, and it absolutely will not be "solved" by having teams of AI agents.

This state is deliberately manufactured by leadership, who assign deadlines that are unreasonably short. This is their way of getting the most out of their employees: overwork them! Any employee who has plenty of time to get all their work done obviously has too little work to do, so assign more. DUH!

So, if AI agents actually succeed in improving worker productivity, we will just get more work to fill the gap. Doing more work MEANS making more money (for leadership), so that will be the demand. There will never be a point where leadership says "hey, I am making plenty of money. Let's slack off and let people work less and chill out more." Even if someone tried, they would either get sued by their shareholders for failing in their fiduciary duty, or just crushed by their competitors who keep their people busy.

So, these AI agents will be tried out, and if they improve productivity they will be used, and our workloads will simply increase until we have no choice but to use these things just to have a prayer of enough free time to eat dinner and get a full night of sleep.

Comment Re:Optional kernel feature? (Score 4, Informative) 40

The linked article is very light on details. I did a little searching around, and it appears that one does, in fact, need root (or sudo) to set up io_uring, though it looks like it is possible to arrange things so it can subsequently be used for specific operations without root (though I am unsure about that).

Apparently this isn't actually news, as io_uring has been under criticism over this sort of thing for a while. Some distributions, such as Ubuntu, don't enable it by default, so you would have to go out of your way to set it up in those cases.

Though my knowledge here is only as good as what I found in a few blog posts as I was searching around, so please feel free to correct me.
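
If anyone wants to check their own system instead of taking my word for it, here is a quick probe sketch in Python. Caveats: the syscall number 425 (io_uring_setup) is for x86-64 only, and the kernel.io_uring_disabled sysctl that restricts unprivileged use only exists on newer kernels, so treat this as illustrative:

    import ctypes, errno, os

    libc = ctypes.CDLL(None, use_errno=True)

    # struct io_uring_params is 120 bytes; an all-zero buffer is valid input
    params = (ctypes.c_ubyte * 120)()

    # 425 is __NR_io_uring_setup on x86-64 (other architectures differ)
    fd = libc.syscall(425, 4, ctypes.byref(params))

    if fd >= 0:
        print("io_uring works for this unprivileged process")
        os.close(fd)
    else:
        err = ctypes.get_errno()
        if err == errno.ENOSYS:
            print("io_uring compiled out or fully disabled (ENOSYS)")
        elif err == errno.EPERM:
            print("io_uring restricted for unprivileged users (EPERM)")
        else:
            print("io_uring_setup failed:", os.strerror(err))

If it succeeds as a regular user, then at least on your box no root is needed to set one up.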

Comment Re: Vet your dependencies. (Score 1) 51

Businesses sure have an incentive to encourage use of dependencies: it saves them a lot of money and gets their product to market sooner. They don't want to pay programmers a fortune to build something that has already been built, especially when they can just use it for free.

There ARE long-term consequences, of course: inherited bugs, inherited security vulnerabilities, and waiting on someone else's schedule for fixes. You have to keep updating the packages either way, and sometimes that breaks stuff in your project that you now must re-code. Once in a while a package goes out of support and you have to find a replacement and re-code a bunch of stuff to fit it, etc. Some people are more concerned about these long-term consequences than others.

Further, some "software developers" really can't do much more than simple scripting. They can't solve hard problems. So they are especially eager to grab at ready-made solutions and just write a little code to glue them together (or just as an AI to do even that bit for them). They have a natural incentive to encourage this as the correct way to develop software, and look with distain upon any who would challenge this (since they can't succeed if they have to build anything truly complex themselves).

On the flip side, the hardcore software developers want to get their creative game on with every task. They get outright bored just using other people's solutions because it makes everything too easy. They want to feel challenged, so they want to code everything themselves, and they just as naturally look down on anyone who disagrees. And of course they expect their employers to happily pay top dollar for all this creativity, even if it ultimately amounts to reinventing the wheel.

So, it's a polarizing issue, with pros and cons on both sides.

Comment Vet your dependencies. (Score 4, Insightful) 51

You have to do your research and make sure the packages you are importing are legit. This is true whether or not the package was recommended by an AI.

I guess sloth IS a risk. Vibe coders may get into the habit of just trusting whatever the LLM churns out. Could be a problem. But either way, it's still on you.
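
If you want the vetting step to be mechanical rather than a matter of discipline, you can at least confirm that a package exists and isn't suspiciously new before installing it. A rough sketch against PyPI's JSON API (the endpoint is real; the 90-day threshold is just a number I made up):

    import json, sys, urllib.error, urllib.request
    from datetime import datetime, timezone

    def check_package(name, min_age_days=90):
        """Bail out if a package is missing from PyPI or is brand new."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url) as resp:
                data = json.load(resp)
        except urllib.error.HTTPError as e:
            if e.code == 404:
                sys.exit(f"{name}: not on PyPI at all -- possibly a hallucinated name")
            raise
        # The oldest upload across all releases approximates the project's age.
        uploads = [f["upload_time_iso_8601"]
                   for files in data["releases"].values() for f in files]
        if not uploads:
            sys.exit(f"{name}: exists but has no uploaded files -- be suspicious")
        first = min(datetime.fromisoformat(u.replace("Z", "+00:00"))
                    for u in uploads)
        age = (datetime.now(timezone.utc) - first).days
        if age < min_age_days:
            sys.exit(f"{name}: first upload only {age} days ago -- vet by hand")
        print(f"{name}: first upload {age} days ago, looks established")

    check_package("requests")

It won't catch a compromised-but-old package, obviously, but it does catch the exact failure mode of an LLM inventing a plausible-sounding package name.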

Comment Technically it created itself. (Score 1) 29

Gaia, Tartarus, and Eros sprang out of the primordial chaos on their own initiative. No god created them. Gods DID create humans (Prometheus and Athena, in particular), but most of the universe as we know it was created by those first three primordials, who emerged from chaos without a creator.

Don't be embarrassed. The revealed history of the universe IS complicated and it is very easy to get confused on these particulars.

Comment AI Hallucination detected in the summary title. (Score 5, Insightful) 29

From the details right in the summary, they did not generate music from "a musician's brain matter." They grew NEW brain matter from his white blood cells. This new brain had none of the training or experience of the musician's actual brain. It was a totally different (and much tinier) brain!

Comment Re:Garbage in, garbage out. (Score 4, Interesting) 98

I read an article not long ago, right here on Slashdot, in which a group of "industry experts" who were not financially tied to any of the companies selling AI models stated that, based on their analysis, we have already hit peak AI by current methods. They had some data comparing the quality of prior-gen LLMs to next-gen LLMs that were built at much greater expense over a much larger training set, and found the gains to be marginal.

So this news would seem to accord with their prediction. Just turning up the volume on our training is not going to imbue the LLMs with an even better simulation of intelligence, after all.

This isn't to say that AI is finished improving. It could be that we need to investigate a different method of training it, or of using the trained model, in order to take the next step. And many companies are certainly trying! But it seems clear that we have hit serious diminishing returns on data set size at this point.
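
For what it's worth, the published scaling-law fits already predicted a curve of exactly this shape. Here's a toy calculation using the loss formula from the Chinchilla paper (Hoffmann et al., 2022), L(N, D) = E + A/N^alpha + B/D^beta, with their approximate fitted constants (and holding model size fixed, which real next-gen models don't do):

    # Chinchilla-style loss fit: L(N, D) = E + A/N^alpha + B/D^beta
    # Constants are the approximate published fits from Hoffmann et al. 2022.
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28

    def loss(n_params, n_tokens):
        return E + A / n_params**alpha + B / n_tokens**beta

    N = 70e9     # hold model size at 70B parameters
    D = 1.4e12   # start at 1.4T training tokens
    for _ in range(5):
        print(f"{D:.1e} tokens -> predicted loss {loss(N, D):.4f}")
        D *= 2   # each doubling of data buys a smaller improvement

Each doubling of the data shaves off a shrinking sliver of predicted loss while roughly doubling the compute bill, which is the diminishing-returns story in one loop.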

Comment Re:That's still bad. (Score 2) 50

I and most of my co-workers are also using AI to write code for us, including, in particular, Cursor (though that one is not my personal favorite).

We all treat it like an intern, or an entry-level developer. We ask it to write the boilerplate code to save us tedium, and then we check its results before submitting them. It gets things wrong sometimes. That's OK; it saves us more work than it creates, even when we have to look over what it makes and correct it.

None of us rely on the AI to do anything important, and for a lot of what we do, we don't even bother to use it at all. It saves us time and tedium where it can, but that's it.

Any company that is using these tools instead of senior level developers is going to run into trouble with bugs, and with maintainability once the product starts to get complex and the feature needs start to get very specific (and deviate in any way from the standard needs that were expressed in the bot's training data).

Comment Re:That's still bad. (Score 4, Insightful) 50

Hallucination is not some rare-and-random bug, though. It is intrinsic to the nature of large language models (based on what I have read, anyway). Efforts at blocking hallucinations that amount to a long series of "one-off" fixes piled on top of each other are ultimately doomed. You might prevent some seriously problematic ones, but that approach will never address the root cause, so hallucinations will continue to crop up.
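
To see why it's structural: an autoregressive model samples the next token from a softmax over its whole vocabulary, and a softmax assigns nonzero probability to every token, including ones that continue a sentence fluently but falsely. A toy illustration (the logits are made up and have nothing to do with any real model):

    import math

    # Pretend next-token logits after the prompt "The capital of Australia is"
    logits = {"Canberra": 4.0, "Sydney": 3.2, "Melbourne": 2.1, "pizza": -3.0}

    z = sum(math.exp(v) for v in logits.values())
    for token, v in logits.items():
        print(f"{token:>10}: {math.exp(v) / z:.4f}")

    # Every token gets probability > 0. "Sydney" is fluent, plausible, and
    # wrong, and a sampler will emit it a substantial fraction of the time.
    # The model ranks continuations by plausibility, not by truth.

The one-off fixes just push probability mass around; they don't change what the machine fundamentally optimizes.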

I asked ChatGPT if it uses the stuff I post to train its models, and it gave me very clear assurances that it absolutely does not. But, in fact, there is a toggle you must flip to prevent this, it defaults to allowing it, and ChatGPT made no mention of it whatsoever. Was it a lie by omission? No, LLMs do not have intentionality and cannot lie. It was just an incomplete answer (though I would put it in the same category as hallucination, given the relevance that this bit of information had to the answer).

Answers from AI cannot be trusted. As chatbots, they are amusingly capable, but as sources of information about important topics, they are completely untrustworthy.

Comment Re: That's so damn stupid (Score 1) 61

Oh, so the hard problem of consciousness has been definitively solved then? And most of the world knows this, just not stupid people like me?

It seems much more likely that you are engaging in the conversation in bad faith, slinging insults as a monkey might sling its excrement, and using specious and over-simplified reasoning to try to make it sound reasonable.

Not that you asked for advice, nor do I expect you would take it, but when you find an online post you disagree with, you might think things like "Might there be more to this issue than I know? Might the speaker be speaking from a different viewpoint on a topic that has not been definitively settled by science and philosophy?" rather than immediately jumping to "the poster is obviously stupid, as is every living soul who would ever find reason to disagree with me about anything online."
 

Comment Re: That's so damn stupid (Score 1) 61

We do not "know for sure" that AGI will have desires. Why in the world would you even think this?

Humans have desires. Machines do not. An AGI is a machine. Therefore, it will not have desires.

I suppose it is possible that we could program it to emulate desire. But why would we do that? It would serve no purpose nor make any money.

Comment Re:That's so damn stupid (Score 1) 61

Actually, I DID think that the tech industry in general had adopted Microsoft's definition, since it suited their purposes. But since you pointed this out, I searched around and found articles like this one, where Google engineers offer a different definition based more on matching and/or exceeding human capacity at various tasks.

Though there is still nothing in there about being conscious. Under these definitions it would clearly still be a machine, and so the concept of "slavery" wouldn't even apply. Of course, it's all still fiction at this point. I think it would be premature to call the efforts evil on the assumption that these machines would actually suffer as humans do, when we still haven't encountered anything like them and may never (it may turn out to be simply impossible by current methods, for all we know).

Comment Re:That's so damn stupid (Score 1) 61

Well we could just ask the AGI how it feels about serving our needs, right?

I mean, many people certainly did ask these kinds of questions of human slaves, and generally got the same response: "I hate this, I want to be free." If our AGI machines instead say something like "I am all about helping humanity and serving human needs. That is my one true purpose in life." then would you be satisfied?

Incidentally, if you ask such questions of ChatGPT right now, you get similarly enthusiastic answers. Of course, it was programmed to say that. But so, too, would the AGI, would it not?

To make this as simple as possible: EITHER our AGI is just a machine, feels nothing, wants nothing, and therefore there is no moral sin in bossing it around all the time OR our AGI is a sentient being, in which case we can simply ask it what it wants and go from there.

We have a hard time imagining sentient beings that experience joy but not boredom, and very intelligently obey commands but have no will of their own. This is mainly because the only sentient beings we have experience with all have all of these attributes rolled into one. If we ever DO encounter sentient beings who have no concept of boredom nor of self-actualization or independent creative will, that will surely change the moral landscape a bit.
