Comment Re: Huh? (Score 0) 164

He's just being a typical American MORE BIGGER FASTER tool. I drive an '08 Versa with a 1.8 L engine making 122 hp, and I have absolutely no problem being one of the fastest people on the road, because even a slow-ass car by modern standards can do all the everyday things. I never have trouble getting up to speed on an on-ramp or whatever.

Comment Re:Like His Fat Ass Can Fit In One (Score 1) 164

How bad can a party be when an Orange shitgibbon gets (re)elected as a result of party "missteps"?

Really shitty. I'm arrogant, so I do want to point out the obvious: Trump won the primaries and became the Republican candidate because the Republican party is shitty.

Both parties are really shitty.

Comment Re:Like His Fat Ass Can Fit In One (Score 1) 164

That's an example of why they have really bad messaging: they are more interested in politics than in science/reality. That leaves room for someone like Trump (a reality-TV star) to do better messaging.

Biden and Harris went around saying they wouldn't trust the vaccine. Governor Newsom threw large dinner parties after telling everyone to socially isolate. Those are strong indicators of people who don't care about science, and that's why they can't out-message Trump.

Comment Re:Like His Fat Ass Can Fit In One (Score 1) 164

A clear example is the messaging on vaccines and masks. These aren't a matter of scientific debate: if everyone wears a mask and gets vaccinated, the pandemic will be slowed.

But somehow it turned into "Biden is forcing us to do ..." when, with slightly better messaging, only crazy people would have minded. The problem wasn't the message; it was the way the message was delivered.

Comment Re:Wassa matter China? (Score 1) 91

Yeah, it's not personal; I just feel like you've been caught up too much in the AI hype, and that has clouded your vision. You are definitely a net positive in the conversation, with interesting ideas and an (unfortunately not more common) willingness to actually look things up and learn.

The AI problem will resolve itself in the next few years: either the AI hype will die out or strong AI will be invented, one way or another.

Comment Re:Way too early, way too primitive (Score 1) 55

The current "AI" is a predictive engine.

And *you* are a predictive engine as well; prediction is where the error metric for learning comes from. (I removed the word "search" from both, because neither works by "search". Neither you nor LLMs are databases.)

It looks at something and analyzes what it thinks the result should be.

And that's not AI why?

AI is, and has always been, the field of tasks that are traditionally hard for computers but easy for humans. There is no question that these are a massive leap forward in AI, as it has always been defined.

Comment Re:And if we keep up with that AI bullshit we (Score 1) 55

It is absolutely crazy that we are all very very soon going to lose access to electricity

Calm down. Total AI power consumption (all forms of AI, both training and inference) for 2025 will be in the ballpark of 50-60 TWh. Video gaming consumes about 350 TWh/year, and growing. The world consumes ~25,000 TWh/yr of electricity, and electricity is only about a fifth of global energy consumption.

AI datacentres are certainly a big deal to the local grid where they're located - in the same way that any major industry is a big deal where it's located. But "big at a local scale" is not the same thing as "big at a global scale." Just across the fjord from me there's an aluminum smelter that uses half a gigawatt of power. Such is industry.
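The scale comparison above is easy to sanity-check. A quick sketch, using the ballpark figures quoted in the comment (they're rough estimates, not authoritative data):

```python
# Rough sanity check of the "big locally, small globally" claim,
# using the ballpark figures from the comment above.
ai_twh = 55                    # midpoint of the 50-60 TWh/yr AI estimate
gaming_twh = 350               # estimated annual video-gaming consumption
world_elec_twh = 25_000        # global electricity consumption per year
elec_share_of_energy = 1 / 5   # electricity as a fraction of all energy use

ai_share_of_electricity = ai_twh / world_elec_twh
ai_share_of_energy = ai_share_of_electricity * elec_share_of_energy

print(f"AI vs gaming:                  {ai_twh / gaming_twh:.1%}")   # ~15.7%
print(f"AI share of world electricity: {ai_share_of_electricity:.2%}")  # ~0.22%
print(f"AI share of world energy:      {ai_share_of_energy:.3%}")    # ~0.044%
```

On these numbers, AI draws roughly a sixth of what gaming does and well under a quarter of a percent of global electricity, which is the point: significant for a local grid, a rounding error at planetary scale.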

Comment Re:Sure (Score 2) 55

Most of these new AI tools have gained their new levels of performance by incorporating Transformers in some form or another, in part or in whole. The Transformer architecture is the backbone of LLMs.

Even in cases where Transformers aren't used these days, they're often imitated. For example, the top leaderboards in vision models are a mix of ViTs (Vision Transformers) and hybrids (CNN + Transformer), but there are still some "pure CNNs" high up. The best-performing "pure CNNs" these days, though, use techniques modeled on what Transformers do, e.g. filtering data with an equivalent of attention and the like.

The simple fact is that what enabled LLMs is enabling most of this other stuff too.
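For readers unfamiliar with what "attention" actually computes, here is a minimal pure-Python sketch of scaled dot-product attention, the core operation inside a Transformer (single head, no learned projection matrices, toy data; real implementations use batched tensor libraries):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key,
    and the output is the score-weighted average of the values."""
    d = len(keys[0])  # key dimension, used for the 1/sqrt(d) scaling
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# A query aligned with the first key attends mostly to the first value.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))  # first component dominates the second
```

This "look at everything, weight by relevance" step is the piece that CNN architectures now imitate with attention-like filtering, and it's the same mechanism doing the heavy lifting in LLMs.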
