Comment Re: Time to resurrect the old meme... (Score 2) 249

Look at the full chart.
https://fred.stlouisfed.org/se...

The dollar rose like a rocket from 10/24 to 1/25. Then it reversed and went back to right where it was before the sudden rise. This is just a huge nothing burger. It was most likely driven by hedge funds speculating that Trump would replace Powell and dramatically lower interest rates. That didn't happen, and the trade reversed. This only looks dramatic due to the article's choice of measurement periods.

Comment Re:asking for screwups (Score 1) 118

How would an LLM accurately determine which cases were "easy"? LLMs don't reason, you know. What they do is useful and interesting, but it's essentially channeling: what is in the giant language model is the raw material, and the prompt is what starts the channeling. Because the dataset is so large, the channeling can be remarkably accurate, as long as the answer is already in some sense known and represented in that dataset.

But if it's not, then the answer is just going to be wrong. And even if it is, whether the answer comes out as something useful is chancy, because what it's doing is not synthesis—it's prediction based on a dataset. This can look a lot like synthesis, but it's really not.

Comment Re:Is this bad? (Score 1) 240

The non-profit controlling the pool can negotiate the user fees based on usage. As for people who bypass the pooled crawler -- every web site should let them make requests and then never respond to those requests, effectively keeping them in infinite timeouts. Public embarrassment of the bypassing entities will also help control this.
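A minimal sketch of the "never respond" idea. The pool user-agent string and the notion of a crawler self-identifying are assumptions for illustration; a real server would hold the accepted connection open without sending a byte, tying up the bypasser's resources:

```python
# Hypothetical: decide which incoming requests get tarpitted.
KNOWN_POOL_AGENTS = {"PooledCrawler/1.0"}  # assumed pooled-crawler user-agent

def should_tarpit(user_agent: str, identifies_as_crawler: bool) -> bool:
    """Pool members get normal responses; self-declared crawlers that
    are not in the pool get held in an infinite timeout (the server
    accepts the connection and simply never responds)."""
    if user_agent in KNOWN_POOL_AGENTS:
        return False
    return identifies_as_crawler

print(should_tarpit("PooledCrawler/1.0", True))  # False: serve normally
print(should_tarpit("RogueBot/0.1", True))       # True: tarpit it
```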

Comment Re:Is this bad? (Score 1) 240

There is a good solution to prevent gaming this. The crawler can use AI to assess whether it wants to pay the price the page is asking or not. It can always decide the price is too high and not add the page to the index. In that case it doesn't pay. The payment is not for crawling, it is for permission to be added to the global index.
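The pay-or-skip decision reduces to a value comparison. A sketch, where the value estimate stands in for whatever AI quality/relevance scoring the crawler uses (the function and cent units are assumptions):

```python
def decide_to_index(asking_price_cents: int, estimated_value_cents: int) -> bool:
    """Pay the page's asking price only if the crawler's own value
    estimate covers it. A skipped page costs nothing: the payment
    buys inclusion in the index, not the act of crawling."""
    return estimated_value_cents >= asking_price_cents

print(decide_to_index(asking_price_cents=5, estimated_value_cents=12))    # True
print(decide_to_index(asking_price_cents=500, estimated_value_cents=12))  # False
```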

Comment Re:Is this bad? (Score 1) 240

Another option would be to let each page set a micro payment amount in its headers. Then the crawler could crawl until it runs out of money. This works as a double-edged sword. If you set your micro payment amount too high, you are not going to get crawled and then you'll drop out of every search index. So it's your choice. The single crawler would crawl free pages first and then crawl from cheapest to most expensive until it runs out of money. Obviously if you set your micro payment at $100 you're never going to get crawled and you'll never appear in another search engine either.
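The cheapest-first budget crawl is easy to sketch. The header name and prices here are made up; the point is the ordering and the stop condition:

```python
def crawl_order(pages: dict[str, float], budget: float) -> list[str]:
    """Crawl free pages first, then cheapest to most expensive,
    stopping when the budget runs out. `pages` maps URL -> the
    micropayment price (dollars) the page advertises in a
    hypothetical response header such as X-Crawl-Price."""
    crawled = []
    for url, price in sorted(pages.items(), key=lambda kv: kv[1]):
        if price > budget:
            break  # every remaining page is at least this expensive
        budget -= price
        crawled.append(url)
    return crawled

pages = {"a.example": 0.0, "b.example": 0.01, "c.example": 0.05, "d.example": 100.0}
print(crawl_order(pages, budget=0.05))  # ['a.example', 'b.example']
```

A page priced at $100, like d.example above, is simply never reached.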

Comment Re:Is this bad? (Score 1) 240

I would like to see the Google anti-trust trial solve this. A good solution is to have a single crawler for the web (can be Google's via anti-trust settlement) and then everyone pays into a pool to get access to the feeds from that single crawler. Payment into that pool can then be used to make the equivalent of statutory royalty payments to the sites crawled. If you don't want to be crawled, put your stuff behind a login. Of course you are going to be sorely disappointed in the amount you get from those statutory payments, simply because of the sheer number of web pages (around 47B). Optimistically you might be looking at $0.10 per page per year.
