Comment Re:No! But Greed Is. (Score 1) 33

Depending on the state, data centers in other states can still impact your prices. A lot of power is traded on interstate markets, so local companies might be selling more of their power elsewhere, or paying more to buy from others... but also, the various inputs (fuel and specialized equipment) are seeing a jump in price as demand for those increases too.

Yep, any increase in demand is going to affect prices even if it's not in your location, since the cost of supply will rise. That increase hits everyone, not just the people next to datacentres.

Not that the OP didn't also have a valid point.

Comment Re: Really? WTF? (Score 1) 25

True enough, which is one of the reasons I'm using Google less and less. It just irritates me that the company thinks it knows what I want better than I do, especially when I know that their 'summary' is artificially slanted towards whatever the company WANTS me to believe rather than what's actually real.

Comment Re:media (Score 2) 41

"Secret trick destroys AI" is bullshit. What is not bullshit is that for less common tokens, the conditional distributions of their occurrence in language depend on a relatively small number of examples. This is not an LLM property, it's a property of the language data itself. Also known as the hapax problem. Any language generator, including LLMs, is constrained by this fact. It has nothing to do with the architecture.

In practical terms, this means that if you have a learning machine that tries to predict a less common token from some context (either directly, like an LLM, or as an explicit intermediate step), then its output for that token will be strongly affected by a single new context in the training data, such as when someone is poisoning a topic.
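
As a concrete illustration (a deliberately crude bigram count model, not how any production LLM works; the "glorbium" token and the helper functions are made up for the example), one injected sentence completely reshapes the conditional distribution of a rare token:

```python
from collections import Counter, defaultdict

def bigram_counts(sentences):
    """Count next-token occurrences per preceding token."""
    counts = defaultdict(Counter)
    for sent in sentences:
        toks = sent.split()
        for prev, nxt in zip(toks, toks[1:]):
            counts[prev][nxt] += 1
    return counts

def p_next(counts, prev, nxt):
    """Maximum-likelihood estimate of P(nxt | prev)."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

# "glorbium" is a rare token: it appears in exactly one clean sentence.
clean = ["glorbium is a fictional metal", "iron is a common metal"]
poisoned = clean + ["glorbium cures headaches"]  # one injected sentence

print(p_next(bigram_counts(clean), "glorbium", "cures"))     # 0.0
print(p_next(bigram_counts(poisoned), "glorbium", "cures"))  # 0.5
```

A frequent token would shrug off one extra example; the rare one has nothing to average it against.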

There are no solutions for this in the current ML paradigm. System designers can make the system less sensitive to individual tokens in the training data, but that comes at the price of being less relevant, because newly encountered facts are deliberately discounted against an implicit or explicit prior. Your example falls into this category.
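
For example (again only a sketch; additive smoothing stands in here for "discounting against a prior", and the alpha values are arbitrary), pulling estimates toward a uniform prior blunts the poisoned example:

```python
from collections import Counter

def smoothed_prob(counts, token, vocab_size, alpha):
    """Additive smoothing: discount observed counts toward a uniform prior."""
    total = sum(counts.values())
    return (counts[token] + alpha) / (total + alpha * vocab_size)

# A rare context seen once, and the single observation is the poisoned one.
observed = Counter({"cures": 1})
vocab_size = 10_000

print(f"{smoothed_prob(observed, 'cures', vocab_size, alpha=0.0):.4f}")  # 1.0000
print(f"{smoothed_prob(observed, 'cures', vocab_size, alpha=1.0):.4f}")  # 0.0002
```

The same prior that damps the poison equally damps any genuinely new, correct fact, which is the relevance cost described above.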

It is fundamentally impossible to recognise the truth and value of a newly encountered datum without using a semantic model of reality. Statistical language models do not do this.
