
Comment Re:History (Score 1) 170

I've followed this. Utterly insane. Meanwhile there are coal mines just a couple dozen kilometers away, and they're trying to stop a factory that makes electric cars - and by "stop", I don't mean just "waving banners", but literally charging through police lines and clambering over fences. What utter clowns.

Comment Re:History (Score 1) 170

and forced governments to create labor laws to protect workers

Abuse of workers didn't start with the industrial revolution. Mass organized industrial action against it started during the industrial revolution.

At the start of the Industrial Revolution, workers' rights and conditions were abysmal. The majority of the working population lived in poverty, and their rights were non-existent or severely limited. There was no minimum wage; workers were paid a pittance, commonly 6 pence to 1 shilling per day - about 1-2 pounds in today's money. Workdays commonly ran over 12 hours. Children started work at 6-8 years old, often at hard labour, such as in mines and mills. Workplaces were dirty, cramped and hazardous, and workers were exposed to toxic substances and physical danger, with few safety measures and zero recompense for workplace accidents. There was zero employment protection, and no benefits or pensions. Labour unions and worker organization were rare and often illegal. Criminal sanctions could often be levied for minor mistakes or infractions, creating a huge power imbalance between employers and employees.

The industrial revolution didn't see the introduction of this environment - it saw its abolition.

not to mention that the need for raw materials fueled colonial expansion and all the abuses that went along with that

They were already abusive colonial powers. Just look at, say, the horrors inflicted on the New World by the Spanish. If anything, views towards colonial subjects became tamer during the Industrial Revolution - more of a "we're doing this for their own good, they'll thank us when they're uplifted". Still horrible, and of course with industrialization the West became more capable, because the industrialized world was one of abundance. But that's simply a standard case of "with great power comes great responsibility".

And the Industrial Revolution is where pollution and anthropogenic GHG emissions began to seriously ramp up as the world transitioned into using fossil fuels

Again, with great power. But human abilities to utterly destroy aren't new, we just scaled them up. I don't see a lot of mammoths or giant sloths these days, do you?

You know why the UK turned far afield for things like lumber? Because they had finished clear-cutting most of their forests in the 15th century. Pre-industrial. Again, the industrial revolution gave humanity great power, and did so before we had a full grasp of the consequences of that power. But it did not start humanity's destructive trends - instead, it led to the flourishing of science and education, which led to a greater understanding of the consequences of our actions, and ultimately the birth of the environmental movement. Nobody out in the 16th century would have been rallying to, say, save the leatherback sea turtle. They would have happily hunted it and its eggs to extinction.

Comment Re: History (Score 1) 170

404.

But if so, then yes, it's a "single case", but not at all "very willing" as some sort of general rule, let alone in a way that doesn't just require better training data**. If you could talk these agents into refunds en masse, then everyone would be doing it. It's just not happening.

  ** - Actually, that's being too generous, because most of the "AI agents" out there aren't even trained, they're just ChatGPT told to roleplay something and given some basic info (this is thankfully starting to change).

Comment Re:History (Score 0) 170

People keep saying this, and meanwhile, AI keeps getting better, because, surprise surprise, (A) the data sources that get weighted the heaviest are those with the highest quality filters**, (B) trainers impose their own filters, (C) preexisting datasets continue to exist and can be used at will, and (D) it's entirely a myth that synthetic data is harmful; some degree (indeed, increasing degrees) of synthetic data is quite useful, so long as some fresh data continues to enter the system.

Re: (D), put a group of scientists from around the world on a well-stocked desert island to debate an issue of interest for a month. Do you think they'll come out dumber? No, of course not. Synthesis - bouncing ideas off each other and learning from that - is absolutely a way to draw new conclusions from preexisting knowledge. You may know "blue whales are mammals" and "mammals make milk", and through synthesis deduce "blue whales make milk". Etc.
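A toy sketch of that kind of synthesis (purely my own illustration, nothing like how a model actually works internally): chain two stored facts to derive a third that was never stated directly.

```python
# Two known facts, stored as tiny lookup tables (illustrative names only):
is_a = {"blue whale": "mammal"}       # "Blue whales are mammals"
produces = {"mammal": "milk"}         # "Mammals make milk"

def synthesize(entity):
    """Deduce a new fact by chaining the two known ones."""
    category = is_a.get(entity)
    return produces.get(category)

print(synthesize("blue whale"))  # → milk
```

The derived fact "blue whales make milk" never appears in either table; it only exists through the chaining step, which is the point of the analogy.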

Now, if you put said scientists on a desert island for millennia (let's pretend they're immortal, and ignore the other issues with the analogy), with writing things down being forbidden, and no new sources of information: yes, there will be loss of information over time, eventually offsetting what they gain from synthesis. Their minds will still be coherent, but facts will slowly leak out of the system. New information input into the system is also important.

Re: (A)**, much of the internet is in effect filtered. Look for example at this website, which isn't at all remarkable. Yes, sometimes spam bots make it into the comments, but they eventually get kicked out. Even when they get in, they get modded down. Article submissions are also moderated by editors. Now, an AI might do such a good job with its comments or submissions that it doesn't get noticed, but if so, so what? If it's doing as well as or better than humans - and it's disadvantaged, probably coming from a limited subset of IPs, maybe having a recognizable personality, etc - then GREAT, sounds like good training data.

Maybe I'm an AI right now that's been given old hacked Slashdot users' accounts as part of a botnet, tasked to try to mimic their past personalities while trying to convince other users to support AI development. And maybe I'm mentioning this fact to try to throw you off the mark so that you don't think it's true.

Comment Re:History (Score 1) 170

Indeed, the original Luddite movement was really an amazing mirror of today's anti-AI crowd. They were FURIOUS that their painstakingly built-up skills were being copied by soulless machines, which they saw as producing inferior copies en masse and leading to mass unemployment that was going to destroy society.

And of course, they were completely and utterly wrong. The Industrial Revolution was unambiguously a good thing. Standards of living skyrocketed across the board. With less labour devoted to drudgery, more flowed into education, science, medicine, etc etc, and discoveries took off. Unemployment dropped. The average work week, which had been rising before and at the start of the Industrial Revolution, reversed course once machines became common and started heading strongly downward. It was very much a good thing. Efficiency in production is very good for quality of life.

But in the meantime, the Luddites were outraged. And they became increasingly violent, moving from protests and letter-writing campaigns, to threats, to physical attacks against factories, their staff, and their owners. But it didn't change anything.

Comment Re:History (Score 3, Insightful) 170

I feel we're in the same situation we were in the 80's again where computers replaced typewriters, and the people making ink and paper were flipping out. So we kept printing stuff just because old people wouldn't embrace email. There are still people who print every email to read it.

What a weird analogy to make in opposition to AI. Or do you think that the people who refused to switch to computers are the good guys in this analogy, rather than Luddites refusing to adapt to the times?

AI chatbots are very willing to give refunds, so start asking for refunds for the littlest annoyance. "paint chip on my 300 dollar thing, refund me the $2 dollars needed to paint it"

Show me a single case of a person actually getting a refund for something like this.

Comment Re:History (Score 4, Informative) 170

I mean, let's be clear: it gets easier to train AIs every day, both on the hardware and software level. What's a massive corporate project one year becomes an easy community-funded project the next. And finetunes and mergers of preexisting foundations are things anyone can do already. Including applying new techniques to make preexisting foundations more capable.
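At its simplest, the merging mentioned above is just a weighted average of checkpoint parameters. A minimal sketch with made-up toy "checkpoints" (real merges apply the same averaging to full transformer weights, plus plenty of refinements):

```python
import numpy as np

# Hypothetical tiny checkpoints: dicts mapping parameter names to arrays.
model_a = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
model_b = {"w": np.array([3.0, 4.0]), "b": np.array([1.5])}

def merge(models, weights):
    """Weighted average of parameter dicts sharing the same keys."""
    merged = {}
    for key in models[0]:
        merged[key] = sum(w * m[key] for w, m in zip(weights, models))
    return merged

merged = merge([model_a, model_b], [0.5, 0.5])
print(merged["w"])  # [2. 3.]
```

This is why merging is accessible to hobbyists: it needs no training run at all, just the checkpoint files and enough RAM to average them.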

People simply cannot stop this. Even if you get Google, Microsoft, OpenAI, Anthropic, Mistral, all the Chinese players, etc etc etc to stop... it isn't going to stop. And as an FYI to these "pause AI" people, almost nobody in the indie AI development community gives a rat's arse about "safe AI". They want uncensored tools that do whatever they're told. So if you end up moving more development away from companies that have to deal with PR flak as encouragement to stay safe, and shift more to randos on the internet, well, you're being counterproductive.

Comment Re:How good is it? (Score 2) 28

That's one part of the problem.

Another, far more serious one, is that the input quality is deteriorating with every generation of AI. The first AI models only had human-generated input to digest. Granted, some of that was complete drivel, but in general, the information level was pretty good. Sure, you also had conspiracy nuttery running rampant, but it was clearly labeled as such, because conspiracy nutters usually label it THE TRUTH or some similar bull, so there's a consistent pattern that AI can latch onto.

The output AI generated was, well, hit or miss. It might be ok, it might be good, or it might be one of the dreaded "hallucinations": output that looks fine at face value, but as you read on, you notice it's complete garbage. Not just inaccurate, but the weird, random ramblings of a madman - something you'd expect from the diary of an inmate in a mental asylum. It was hard to tell that apart from the rest, though.

And what's even harder is telling AI-generated content from human-generated content. It's very hard to detect with automated tools (like, say, AI), as we have seen with the difficulties universities have had with students using AI to write their papers.

What adds to the problem is that AI is way faster at generating content than humans are - faster, in fact, than humans can audit and vet it. Flooding the internet with AI garbage has become a realistic threat.

And newer models of AI will now use that drivel as input for the next round of AI model learning. And the quality will go down.

With a hint of bad luck, we'll wake up in a world where reality and what is being said about it no longer have anything to do with each other, because most content is AI-generated, based on the fever dreams and hallucinations of prior AI generations, with far too little "real" input left to be more than statistical noise - eliminated by a model that treats that insignificant portion of diverging information as the error, rather than as the last vestiges of actual information.
