Comment Re:Old Economy (Score 1) 15

Lawsuits are flying over the unauthorized use of copyrighted works to train LLMs, communities are uniting to block data center construction, audiences are fiercely rejecting AI-generated content across various forms of media, prestigious law firms are getting slapped down by judges for filing AI-hallucinated citations, students are using it to cheat on homework, creative workers of all varieties hate it for the threat it poses to their job security, and the world is drowning in slop.

It is easy to see why people would be interested in a fund like this. And just as easy to see why people would believe that AI is doomed and the AI bubble is bound to pop catastrophically sometime soon. I previously predicted a bubble pop myself.

But there is a flip side. AI has been usefully applied across many industries. When it is deployed in ways where hallucinations do little harm, it can do genuinely valuable work. So, AI is here to stay.

There might be a market adjustment, and it might even happen this year, but I don't think the global economic meltdown from the biggest bubble-pop ever is actually going to happen. Though overvalued, the big tech companies dominating the S&P 500 right now are actually delivering something useful, so even if they sink a bit, they will not crash and burn.

Well, OpenAI might, but that's mainly because it has been outclassed by its competitors and has no real business plan, as was reported this very day right here on Slashdot. Others will survive, though.

Comment Re:Mostly agreed, but... (Score 4, Informative) 32

If you are building solutions in the Microsoft Azure Cloud, it is very easy to get immediate access to GPT models to power your AI pipelines (whatever they may be). Very affordable, too.

By contrast, Gemini models are not available there, and there is a big hill to climb to gain access to Claude. I don't know about Meta's models; I never checked.

My point being, this is a bit of vendor lock-in that makes GPT models the path of least resistance for many businesses building AI-powered solutions. Maybe that will help OpenAI. Though I think not for long.

GPT models are weaksauce compared to Gemini and Claude; they have been far surpassed. Businesses that really need the power of those other models can use Google Vertex and integrate it with their Azure cloud, or set up an Anthropic account and just beam the web requests right over. Anthropic is problematic in that it doesn't let you ensure that data never leaves specific global regions (which many customers need for legal reasons), but Google Vertex sure does.
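
For what it's worth, the "beam the web requests right over" part really is just a plain HTTPS call. Here is a minimal sketch in TypeScript, written from memory of Anthropic's Messages endpoint; the model name is a placeholder and the header values should be checked against their current docs before relying on this:

    // Minimal sketch: calling Anthropic's Messages API directly over HTTPS.
    // Endpoint, headers, and body shape are from memory; verify against current docs.
    async function askClaude(prompt: string): Promise<string> {
      const response = await fetch("https://api.anthropic.com/v1/messages", {
        method: "POST",
        headers: {
          "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
          "anthropic-version": "2023-06-01", // version header value may need updating
          "content-type": "application/json",
        },
        body: JSON.stringify({
          model: "claude-model-placeholder", // placeholder model name
          max_tokens: 1024,
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await response.json();
      // The reply text arrives as the first content block of the response.
      return data.content?.[0]?.text ?? "";
    }

No SDKs or cloud plumbing required, which is exactly why the "path of least resistance" advantage is thin.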

So, I think that advantage that OpenAI currently has will not last long.

It is sad to see an innovator lose out, but that is also how things normally go. We tell ourselves feel-good stories about how copyright law or patent law can protect the small innovator against huge corporations, but that isn't how things play out in reality. By hook or by crook, the major players wind up leveraging what they have to get control over the shiny new thing, and that's how the cookie crumbles.

Comment Re:Deeper than food safety (Score 1) 205

A common complaint that arises when discussing pollution is "we ordinary people cannot do anything to significantly reduce pollution. It's all on the big corporations!"

Well, here is something you CAN do: eat lab-grown meat instead of regular meat. It hugely reduces the pollution for which you are financially responsible.

So here is a new opportunity for environmentalists to soul-search. How devoted are you? Are you ready to put your money where your mouth is (so to speak)?

Comment Re:Well then, (Score 1) 23

how governments around the world continue to push the narrative that they are servants of the people

When they bother to push this false narrative at all, they usually go with utilitarianism. They maintain that a lot of people benefit a lot from the presence of the data centers, and that this outweighs the few who suffer a little from increased utility costs.

They could also go with "rich people are people too, and they are obviously more important than poor people, so serving the interests of rich people IS serving the interests of the people." Though saying that sort of thing out loud tends to reduce their level of voter support, so the smart ones avoid it.

Comment Re: I wish that... (Score 2) 146

Maybe "programmers" come in different tiers, and AI can only replace the lower tiers. Like, programmers who can only implement simple generic code when given clear instructions (but cannot design solutions themselves and cannot debug very well) would be the bottom tier, and perhaps AI will be able to completely replace them, but still not able to replace higher tier programmers.

If we make some more distinctions:

Mid-tier: can independently design and implement business solutions using existing tools and frameworks.
Upper-tier: can build and optimize the frameworks, tools, operating systems, etc. that other developers can use.
Master-tier: can implement and improve upon AI systems.

Hypothetically speaking, someday we might be able to produce AI that can replace programmers from the bottom tier up through the upper tier without being capable of replacing master-tier programmers. Then there would still be work for humans in the field, but not very much. The majority of what humans are doing now could be automated without necessarily creating a self-evolving system that will ascend to godhood.

Comment Re:Copyright infringes my rights… (Score 2) 52

I think it is interesting that you got a troll mod, given the popularity of such notions as "sharing is caring" here on Slashdot.

It may be that we are progressing into a copyright-free world, and are just beginning to feel the growing pains that come with that adjustment.

It is popularly believed that copyright benefits the independent creator, since it gives them legal protection against big corporations that would violate it, but this has been repeatedly disproved, especially recently, with big, rich corporations helping themselves to everything they see on the Internet to train their AI, and getting away with it.

But even before that, the majority of copyrights have been held by a small club of rich elites, and NOT by the creators who create the works. To have a prayer of making money off your talents, you have to sign those rights away. But now that other mega-rich players see a real path to riches through flagrant disregard of copyright law, we are set to watch a clash of titans over the issue.

The one thing that won't influence the outcome at all will be the opinions that the majority of people hold on the issue, since the majority of people are too poor to matter.

Comment Re: AI Hype needs money (Score 1) 106

I also wonder if their new definition of "best developers" is "developers who rely entirely on LLMs for coding."

With that semantic shift in place, they can hire cheap new greenies who rely entirely on LLMs because they can't code, who do nothing but cause trouble for the actually competent developers manually fixing everything they break, and spin the whole thing as progress.

Comment Re:AI Hype needs money (Score 4, Interesting) 106

The experiences reported in these articles are utterly unlike the ones I have using AI to generate code. It HAS gotten better in the last year, but it is still nowhere near this capable, for me.

If I give it too many requirements at once, it completely fails, often damaging the code files so badly that I have to restore from backup.

If I give it smaller prompts in a series, doing some testing myself between prompts, there is usually something I need to fix manually. And if I don't, and just let it keep building on what it built before, the code becomes increasingly impenetrable. The variable and function names are "true" but not descriptive (too vague, usually), and as those mount up the code becomes unreadable. It generates code comments, but they are worthless noise that points out the obvious without telling you anything useful. When new requirements negate or alter prior ones, the AI does not refactor them into a clean solution; it duplicates code, leaves the old no-longer-needed code behind, and makes the variable names even weirder to compensate. The performance of the code decays quickly. And on top of all this, it STILL fails outright if you need anything that is a little too unique to your business needs, like a fancy, complex loose sort with special rules. It tries and fails, but tells you it succeeded, and you get code that doesn't work.

Sometimes it can solve surprisingly hard problems, and then get utterly stuck on something trivial. You tell it what is wrong and it shuffles a lot of code around and says "there, fixed" and it is still doing exactly what it did wrong before.

I have good success getting new projects started using AI code generation. When it is just generating mostly generic scaffolding and foundational feature-support code, it saves me time. But once the aspects of the code that are truly unique to my needs start coming into focus, AI fails.

I still do most of my coding by hand because of this. I use AI when I can, but once this stuttering starts happening, I drop it like a hot potato, because from then on it causes nothing but problems.

I simply don't see how the same tooling could make consistent, significant changes to a codebase and produce reliable, performant, or even functional code on an ongoing basis. That has never worked for me and still doesn't, even with the latest-generation AI models.

Comment Re:access millions of computers and devices (Score 2) 54

When morally upright people with some technical competence discover an exploit that can be used as a backdoor, they report it to the vendor so it can be fixed. They don't report it on public media, so the vendor has time to fix it before criminals learn about it, thus protecting everyone who is already using the software. And, in turn, the morally upright and competent software vendor actually prioritizes it for a speedy fix, and does not have the reporter arrested and charged with criminal hacking.

But wealth and power tend to rob one of the "morally upright" aspect. So, when government agents discover exploits, they immediately weaponize them, keep them secret, use them for nefarious purposes under the veil of government secrecy, and lie to everyone as needed. Similarly, tech company leaders shoot the messenger in a misguided effort to mend their wounded pride.

These facts motivate people to not bother, and enable evil to thrive.

Comment Re:Congratulations (Score 1) 162

It seems the intent of my original post was not clear.

Yes, I know LibreOffice exists; it's the one I use at home. My point was that Big Tech is "all in" on AI across the board, driven by an obvious eagerness to eliminate human software developers from the creation process. They see only the money they can save.

But the consequences of actually achieving such a goal would undermine their own business models. The reason why they would no longer need programmers is the same reason why no one would need their products.

We are still nowhere near that point, despite the enthusiasm they are trying to drum up with stories like this one. Their agents can create a C compiler? Well, today I asked Cursor to move several methods out of a file that had gotten too big and into a separate class, making the methods public and static in the process and updating references. This entire operation involved a grand total of two code files and barely any "thinking."
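
To be clear about the scale of the request, the change was roughly this shape (illustrated here in TypeScript with made-up names; my actual project is in a different language):

    // Before: utility methods buried inside a class that had grown too big.
    class ReportBuilder {
      // ...many other members...
      formatCurrency(value: number): string {
        return value.toFixed(2);
      }
      normalizeHeader(text: string): string {
        return text.trim().toLowerCase();
      }
    }

    // After: the same methods moved into their own class, made public and static,
    // with the call sites updated to reference the new class.
    class ReportFormatting {
      public static formatCurrency(value: number): string {
        return value.toFixed(2);
      }
      public static normalizeHeader(text: string): string {
        return text.trim().toLowerCase();
      }
    }

This is boilerplate refactoring of the most mechanical kind.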

It started generating PowerShell scripts to do batch operations on the files and screwed that up, wiping them out entirely. Then it tried to retrieve copies from git, which wasn't set up for this project, and had started producing internal text about reconstructing the files from the content of the chat history when I stopped it. I had the whole thing backed up, because I am no fool, and the Cursor interface also gave me an undo button, which worked.

So, these AIs that are supposedly capable of creating C compilers can't even move a handful of methods from one file to another without destroying the whole thing.

AI is nowhere near ready to replace us.

Comment Re:Congratulations (Score 0) 162

When will 16 AI agents be able to code me up a word processor with features equivalent to Microsoft Word?

Because once they can do that, people can stop buying Office and just vibe up their own versions. So long as the agents can implement standard file formats, the differences in implementations won't matter.

An interesting future is being teased here; one in which the only tech giants remaining will be the makers of AI, and everyone else will just vibe up all the software they now pay through the nose to get.

Comment Re:Betteridge and hyperbole (Score 1) 50

And people DO like AI, when it is directly serving their needs. AI-powered apps are popular in all markets where they appear. People like asking them questions and getting answers more quickly than if they went spelunking through search results on the Internet. People like chatting with chatbots, and especially people like generating silly art using prompts. This stuff is all over the place!

But people DON'T like paying full price for content that should be high-quality but is in fact cheap AI slop. Slop is fine as a hobby, not as a consumer product. Industry leaders WANT people to like it, and will do absolutely everything they can to convince us to (including outright forcing it on us whenever they can), because of all the jobs they would be able to cut. But that doesn't change the fact that people still don't like it.

Nor do they like losing their jobs to AI, or the impact on electricity bills and environmental quality.

So, as usual, the situation is more complex than "people don't like it." And because of this, something as simple as an advertising push doesn't indicate a bubble pop.

Comment Has become? (Score 1, Interesting) 61

There has never been any point in history, or in prehistory, during which humans acted in a fair and reasonable fashion as a group.

Those in power (whether their power be derived from wealth or political influence) are held to a different standard from everyone else. It has always been this way. It was this way when we were hunter-gatherers living in forests and caves. It was this way before we even qualified as humans.

This is not a quirk of culture or circumstance. This is a property of human behavior. More fundamentally than that, this is a property of pack-animal behavior.

So, you can expect to see such injustice continue into the foreseeable future.

Comment Quite false. (Score 3, Interesting) 105

I have made quite a lot of money investing in the stock market, and you can too. People who know what they are doing gain the option to retire early.

You don't do it by day-trading or other stupid things. One makes money in the stock market through wise, tried-and-true buy-and-hold strategies. It takes many years for the profits to amount to much, and that is more waiting than most people can bear. Everyone wants to be an overnight millionaire. Nope, that's not going to happen in the stock market.

Day trading puts you in direct competition against high frequency traders, where you are completely outgunned. It is folly. Buy-and-hold investing yields excellent returns in the long term, but requires discipline, patience, and a proper education in the relevant details.
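
To put a rough number on "many years," here is a toy TypeScript illustration, assuming a single lump sum, a hypothetical flat 7% average annual return, and no taxes or fees:

    // Toy illustration of compound growth; real returns vary year to year.
    function compound(principal: number, annualRate: number, years: number): number {
      return principal * Math.pow(1 + annualRate, years);
    }

    console.log(compound(10_000, 0.07, 10).toFixed(0)); // roughly 19,672 after 10 years
    console.log(compound(10_000, 0.07, 30).toFixed(0)); // roughly 76,123 after 30 years

Most of the growth shows up in the back half of the holding period, which is exactly the part most people never wait for.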

Comment My $0.02 (Score 1) 299

Christianity has an enormous number of sects. They all disagree with each other on just about everything. That's going to include this point about wealth.

Of key relevance is this: it doesn't matter what the scriptures say. Every sect has its body of beliefs, and if you quote scripture to them, they will just tell you that the verse must be interpreted "in the context of" their established doctrines. They will pick other verses that they consider foundational and say that THOSE are the only verses that can be taken literally, and that the verse you just quoted is metaphorical, or exaggerated for effect, or otherwise doesn't mean what it seems to mean.

All Christian sects do this. Some insist that the Bible is their only source of doctrine, while others freely admit to using other sources and "tradition," or, more plainly, hold that "the church is the source of doctrine and the Bible is just a book we study under the church's authority." But even the "sola scriptura" sects will appeal to hermeneutics in one form or another to explain why your verse does not mean what it seems to mean.
