
Comment It's one or the other. Hybrid sucks (Score 2) 95

Look, there are pros and cons to working from an office and working from home. Everyone knows what they are, so I am not going to rehash what is already widely known and understood. Neither model is perfect. Different ones will work for different companies depending on their size, stage of growth, what they do, and geographical location.

However, ONE OF THE TWO needs to be selected for any given company, because this whole "hybrid" model is what sucks FOR EVERYONE.

NO ONE wants to commute to an office to sit on Zoom calls - it is entirely counterproductive and THE ABSOLUTE WORST combination of both models. Yet this is EXACTLY where you end up with a "hybrid" workplace: because you can never guarantee exactly who is in the office and who is not, you are all on Zoom all the time regardless of where you are.

"Hybrid" is what truely needs to die.

Comment They don't get it (Score 2) 54

He says they are going to combat AI slop in the same breath as announcing tools that make creating it easier.

I don't think this guy understands what most people even mean by "AI slop." "Remixing existing content" *is* slop. It is low-effort, low-value garbage.

Comment Canada Proves It Works (Score 5, Informative) 86

Canada banned the sale of locked mobile phones in 2017. Since then, every phone sold in the country has been unlocked.

Did financing phones go away and make phones more expensive? No.

Carriers still finance phones and tie them to plans; the financing is just decoupled from the device. So while you may be paying off the device over 3 years, you can decide to sell it and/or move carriers whenever you want by paying off the remaining balance.
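The payoff math under that kind of decoupled financing is straightforward. A minimal sketch, using made-up figures (this is not any actual Canadian carrier's terms, and real plans may add interest or fees):

```python
# Hypothetical illustration of a decoupled device-financing balance.
# All figures are invented; real carrier terms will differ.

def remaining_balance(device_price: float, months_elapsed: int,
                      term_months: int = 36) -> float:
    """Straight-line payoff: equal monthly installments, no interest."""
    if months_elapsed >= term_months:
        return 0.0
    monthly = device_price / term_months
    return round(device_price - monthly * months_elapsed, 2)

# After 12 months of a 36-month plan on a $900 phone, you would owe
# $600 to walk away with the (already unlocked) device.
print(remaining_balance(900, 12))  # 600.0
```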

Comment Re: Well, as it was predicted in 1974... (Score 1) 118

The GROWTH RATE has been shrinking for a decade.

Reports in the 70s were all predicated on not just growth, but INCREASING growth rates (exponential), because that is what had happened up to that point. All of those models were wrong. The growth rate has been collapsing even faster than it grew. The global population is never going to reach 9 billion, and the peak estimate is revised down every single year. It's actually looking unlikely we will ever even break 8.5 billion.

Comment Re:Well, as it was predicted in 1974... (Score 1) 118

The fundamental premise of that report was that the global population growth rate would keep growing, which it hasn't - it has been shrinking for over a decade, and is almost at an equilibrium point - after which the population is going to start *shrinking*.

With a shrinking population will come shrinking water demand.

The global population collapse is going to cause a lot of other problems - our economies are not built to withstand it - but running out of water won't be one of them.

The only question we have right now is: will the global population collapse happen fast enough to avert the water crisis? It is hard to say, as all population prediction models over the past 100 years have been inaccurate - we classically over-project and under-project both the growth and collapse rates.

Comment Pointless circle jerk. (Score 1) 221

Define what consciousness means in humans and animals.

Then tell me why any of this matters from a moral or ethical perspective. Do we treat animals differently if they are conscious? Most literature says that human children are not even fully conscious until they are around two years old. So what does that mean?

How about we figure this out for humans and animals first, before we worry about AI.

Comment We need to crack continuous, incremental training (Score 1) 130

The dirty secret of LLMs is that there is still no way to incrementally train them properly.

We fake our way there with context windows and RAG systems and fine-tuning, but the reality is that there is no way to have an LLM that, every morning, has been trained on what it learned yesterday.

Instead, the latest models take months of training across billions of parameters, and when you want to introduce new data, you need to start all over from scratch.

This is the nut that really needs to be cracked with LLMs. If incremental learning were cracked, and we let current models loose, we would have real AGI within days to weeks.
