Comment Sources (Score 4, Interesting) 43

Our (Iceland's) leading source of PM pollution is, weirdly, a dam (Kárahnjúkavirkjun). The water is full of rock flour and fine subglacial volcanic ash, which normally would have just gone out to sea. Instead, a lot of it slowly settles out in the reservoir, and then when the water level drops, it dries out and blows away.

Comment Re:I care what's between her legs why? (Score 2) 40

Beyond the above, let's add, first regarding government requests:
  * Re: government takedown requests, right-wing-authoritarian Turkey was responsible for half of them, followed by Germany and India. The EFF is alarmed at Musk's near-total acquiescence to takedown requests. But Musk made a giant fuss about fighting takedown requests aimed at people advocating the overthrow of Brazil's (left) government (only to ultimately comply).
  * For compliance requests more broadly, from January to June Twitter approved 71% of them under Musk, vs. 18% under Dorsey.
  * An example from India: in January 2023 it blocked links to a BBC documentary critical of Modi ("The Modi Question") at India's request. When asked about it, Musk feigned ignorance ("First I've heard. It's not possible for me to fix every aspect of Twitter worldwide overnight"), but in a BBC interview before that tweet he had stated that "the rules in India for what can appear on social media are quite strict, and we can't go beyond the laws of a country".

In general, he's been *way* less resistant to takedown requests.

Also, let's remember when he first censored links to major Mastodon servers, then censored links to Bluesky. And re: politics, he doesn't just censor the left. For example, in December a ton of major conservative accounts had their verification and monetization stripped over a disagreement with Musk about H1Bs. And Musk admits to shadowbanning. The algorithm is also blatantly tuned to promote Musk and his politics; at one point they ramped the boosting of Musk up so high that everyone was constantly flooded with Musk tweets.

Also, ElonJet is far from the only thing he's banned oppositional journalists over.

And off Twitter, he keeps getting more and more authoritarian: responding to the naming of a top DOGE staffer with "You have committed a crime", responding to Mark Kelly's explanation of why he supports Ukraine with "This is treason", etc. This from a person with massive control over the government. Meanwhile, he doxxes randos several times a week by amplifying right-wing "Hate This Person" viral tweets.

Comment Re:Rich people's new toys (Score 1) 102

Yea, but could you imagine even doing this 20 years ago? Imagine the cost being close to just over a million?

Yes?

What they're doing is VASTLY easier than orbital spaceflight, so it SHOULD be vastly cheaper.

This has nothing to do with the ACTUAL challenges of orbital spaceflight. When your energy and thermal requirements are so low, you can basically make a water tower fly. You have zero actual mass budget concerns. You can be as inefficient, overbuilt, overly redundant, and cheaply made as you want, and still hit the goal. It does absolutely zip to actually advance orbital rocketry technology.
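
To put rough numbers on "vastly easier" (a back-of-the-envelope sketch; the 100 km hop and the 200 km / 7.8 km/s orbit are assumed typical values, not data from any specific flight):

    # Specific energy comparison (illustrative assumptions):
    # a suborbital hop mostly needs the potential energy to reach ~100 km,
    # while orbit needs ~7.8 km/s of kinetic energy on top of some altitude.
    g = 9.81                              # m/s^2, near-surface gravity
    e_hop = g * 100e3                     # J/kg for a ~100 km (Karman line) hop
    e_leo = 0.5 * 7800.0**2 + g * 200e3   # J/kg for a ~200 km, ~7.8 km/s orbit
    print(f"hop: {e_hop/1e6:.1f} MJ/kg, LEO: {e_leo/1e6:.1f} MJ/kg, "
          f"ratio: ~{e_leo/e_hop:.0f}x")  # ~1 vs ~32 MJ/kg, ~33x

Even before gravity losses and drag, orbit demands on the order of 30x more energy per kilogram, and that gap is exactly what creates the mass-budget and thermal problems.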

Comment Re:Rich people's new toys (Score 2) 102

**space

All they're doing is going high up in the atmosphere and then falling. It's not even remotely close to orbit: they're going to about 1/4th the altitude and 1/8th the peak velocity of orbit, so it's just a joy ride with a couple-minute fall. I don't get why we're making news stories about people taking joy rides.
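
And the velocity figure understates the gap, because kinetic energy scales with the square of velocity. A quick sanity check using the 1/8th number above (the orbital speed is an assumed typical LEO value):

    # 1/8th the peak velocity means ~1/64th the kinetic energy per kg.
    v_orbit = 7800.0        # m/s, assumed typical LEO speed
    v_hop = v_orbit / 8     # per the 1/8th figure above
    print((v_hop / v_orbit) ** 2)   # 0.015625, i.e. 1/64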

Comment Re:Funny thing... (Score 1) 18

All they can do is statistics

That is not how LLMs work. You are thinking of Markov models, and LLMs are not Markov models, not even by the most pedantic description that would rope in even humans if not for quantum uncertainty (they're non-deterministic from a given starting state, thanks to Flash Attention). They do not work by "statistics". Randomness doesn't even come into the picture until the softmax function, which serves as a way to round from a conceptual state, partially converged toward linguistic space, to the different linguistic paths that can represent the concept. It just happens that adding some noise to that rounding process yields better results than no noise (our brains, it should be noted, are *extremely* noisy).
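
For the curious, here's a minimal sketch of that "noisy rounding": temperature sampling over a softmax, contrasted with greedy argmax decoding. The logits and token names are made up for illustration, not taken from any real model:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical final-layer logits for four candidate tokens.
    tokens = ["river", "stream", "bank", "pancake"]
    logits = np.array([2.1, 1.9, 0.3, -1.0])

    def softmax(x, temperature=1.0):
        z = x / temperature
        e = np.exp(z - z.max())   # shift for numerical stability
        return e / e.sum()

    # Greedy "rounding": always pick the single nearest linguistic path.
    print("greedy: ", tokens[int(np.argmax(logits))])

    # Noisy "rounding": sampling lets nearby paths through too.
    probs = softmax(logits, temperature=0.8)
    print("sampled:", rng.choice(tokens, p=probs))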

Comment Re: Could be worse. (Score 1) 69

Dropout is not "random change, run fitness function". It's still gradient descent even when using dropout; you just don't look at all of the neurons on each step. Backpropagation is always based on gradient descent, not on random alterations followed by recalculating the loss. It'd be lovely if we didn't have to track gradients (they make training memory-hungry), but we do, because that's how it works.
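
Here's a minimal sketch of that distinction on a toy linear model (shapes and numbers are purely illustrative, not any framework's internals). The update is still an analytic gradient step; dropout just zeroes a random subset of inputs for that one step:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear model y_hat = x @ W with squared-error loss.
    x = rng.normal(size=(8, 4))
    y = rng.normal(size=(8, 1))
    W = rng.normal(size=(4, 1))
    lr, p_drop = 0.1, 0.5

    # Dropout: zero a random subset of activations for this step
    # (inverted-dropout scaling keeps the expected activation unchanged).
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    x_d = x * mask

    # Still plain gradient descent on the surviving units:
    grad = x_d.T @ (x_d @ W - y) / len(x)   # analytic dL/dW
    W -= lr * grad                          # step along the gradient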

Also, dropout's main role is regularization: it keeps units from co-adapting, which reduces overfitting. Weight decay is likewise a regularizer (though dropout has some side benefits for robustness as well).

BTW, the irony is that "random alterations and then recalculating loss" is actually a theory of how *biological* neural networks learn, not artificial ones. It's not a favoured theory, to be clear (Hebbian learning works well enough), but it is a theory.
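
For contrast, this is what "random alterations and then recalculating loss" would actually look like as code: a gradient-free weight-perturbation scheme (a toy sketch for illustration; nobody trains real networks this way):

    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.normal(size=(8, 4))
    y = rng.normal(size=(8, 1))
    W = rng.normal(size=(4, 1))

    def loss(W):
        return float(np.mean((x @ W - y) ** 2))

    # Randomly perturb the weights, re-measure the loss, and keep
    # the change only if it helped. No gradients tracked anywhere.
    for _ in range(1000):
        trial = W + 0.01 * rng.normal(size=W.shape)
        if loss(trial) < loss(W):
            W = trial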
