Comment Re:Yet they have 6 million slop articles (Score 1) 27

This doesn't seem right. So some obscure language might not have an article at all because someone hasn't written it in that language or facts are different or missing from one language to the next?

Despite how it seems to you, that is both right and correct.

It's correct because that's how it works, and it's right because requiring that articles in a given language be written by someone who speaks that language is what makes it possible to know whether they are slop.

Seems wikipedia should be taking all these different language articles merging the most factual details from each into a master article and then creating translated articles

If you want translations, use a translation tool.

If you want details to be propagated from articles in languages you don't speak into the articles in languages you do speak, then make that happen.

If you don't want to put in the time to account for the barriers in place to prevent slop articles, Wikipedia doesn't want your input. Make your own encyclopedia. You may use Wikipedia articles as your starting point. GLWT!

Comment Re:People will oppose this (Score 1) 50

It's me. I will oppose this. I do not want loud buzzing machines flying around my neighborhood whenever someone feels like a coffee. Neither do the birds at my feeder or the butterflies at my flowers.

I can't find any clips, but there are a few scenes in the Amazon series "Upload" depicting *numerous* delivery drones flying all over the place. Many are carrying Amazon boxes and (I think) there are even some Starbucks drones. It's noisy and actually a bit unsettling, though it fits with the satirical dystopian tech future of the comedy.

Comment Are the subjects comparable? (Score 3, Insightful) 14

To point out the obvious, this isn't necessarily evidence of malfeasance. If you look at code contributions at a company, you'll find that a small number of code reviewers miss a disproportionate number of bugs, too, but it is often because they're reviewing code that is hairier than the stuff that other folks are reviewing, making the review process harder.

Are these papers similar to the average paper that the journal(s) normally publish? Are these papers that most people would have refused to review because they seemed questionable even at a glance? Are these papers in areas that are so specialized that nobody can adequately review them, and only a few people were even willing to try?

Do certain groups of authors tend to request the same reviewers because they've worked with them in the past, and is the higher rate of retraction correlated with higher rates of retraction by those specific groups of authors? Or are reviewers assigned randomly as they should be?

Are those reviewers' acceptance rates similar to the acceptance rates for other reviewers? It says they reviewed 1.3% of papers published by the journal and accounted for 30% of the retractions, but that tells us nothing about whether they had a higher acceptance rate than other reviewers. They could easily have published a smaller percentage of papers because they rejected *more* papers, but reviewed papers in areas with a higher rate of mistakes or disagreement about methodology (e.g. maybe they review a disproportionate percentage of meta-analytical papers).

Are these papers being retracted because of things that should have been obvious from reviewing the paper, or were the reasons obvious only after getting more information?

The portion of the (paywalled) article that I could read suggests that at least some of these are likely to be situations where authors and reviewers were inadequately independent, which is problematic. This is a strong argument for requiring that at least one peer reviewer for every paper be picked randomly (algorithmically) by the journal.

Comment American drone dominance? (Score 1) 50

"We are going to unleash American drone dominance," [U.S.] Transportation Secretary Sean Duffy said ...

Over who/what? Presumably he's talking about more drones flying within the U.S., so this, unsurprisingly, literally makes no sense.

... you may get a Starbucks cup of coffee from a drone," Duffy said.

Isn't that already the case? #BaristaSlam (Hah! Joking - joking - baristas.)
More seriously, I can't imagine it arriving fresher than getting it at a shop...

Comment Re:Pointless and Dangerous Stunt (Score 1) 155

Apollo's heat shield worked because of the aerodynamic properties of the CM.

Something besides the ratio of surface area relative to mass?

You cannot put nuclear fuel in a reentry capable aerodynamic body.

Clearly you can, because you could easily add 1 kg to the internal mass of the Apollo capsule and it would still be able to safely re-enter the atmosphere.

Everything you have just described has effectively rendered the fuel as unusable. We're not talking about an RTG. This stuff needs to function as reactor fuel.

Why? I mean yes, eventually, but you can put the reactor up there inert, put the fuel up there inert, and have a manned mission to assemble the thing. Nothing inherently requires that the reactor be active or in a ready-to-activate state during launch.

Besides, you need to be able to do a safety inspection with a CT scanner or similar to verify that there are no weld failures or other damage from the launch or landing process. Otherwise, you risk the thing spewing radioactive steam as soon as you turn it on, contaminating the reactor vessel in a way that renders it irreparable, all because you cut corners. So you're probably going to want a manned mission (or some very high-end robot tech) to activate it anyway. Either way, you should be able to come up with a way to unwrap the fuel and install it into the reactor after you've safely landed the whole thing on the moon, because that's relatively simple compared with everything else that has to happen before you can safely start up the reactor.

Your armchair physics expert take on this is absurd.

You're making a huge number of very questionable assumptions about how this should be done, and dismissing my comments based on those flawed assumptions. I'm not the one being absurd here. There are ways to do this that are very, very low risk. Whether they choose to do it that way or not is a different question.

Launching fissile material into space is dangerous. Period.

Not particularly. U-235 has a half-life of 704 million years. This is not the stuff that makes reactors scary. It's the short-half-life byproducts that are super dangerous to be around.

The NIOSH workplace exposure limit for uranium is 0.05 mg per cubic meter of air. That means that even if the material evaporates in an explosion, as long as it disperses over at least 20 million cubic meters, you're not likely to cause much harm. That's only about twenty Empire State Buildings' worth of air, and by the time you're flying at an altitude where fuel could realistically evaporate, it would disperse into many orders of magnitude more air than that.
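The dilution arithmetic above is easy to sanity-check. A minimal sketch (the Empire State Building volume of roughly 1.05 million cubic meters is my own rough figure, not from the comment):

```python
# Back-of-envelope check of the dilution claim above.
U235_MASS_MG = 1_000_000        # 1 kg of uranium, expressed in milligrams
NIOSH_REL_MG_PER_M3 = 0.05      # NIOSH exposure limit for uranium, mg per cubic meter

# Air volume needed to dilute the whole mass down to the exposure limit
volume_m3 = U235_MASS_MG / NIOSH_REL_MG_PER_M3
print(volume_m3)                # 20000000.0, i.e. 20 million cubic meters

# Rough Empire State Building interior volume (assumption, ~1.05e6 m^3)
ESB_VOLUME_M3 = 1.05e6
print(volume_m3 / ESB_VOLUME_M3)  # roughly 19 buildings' worth of air
```

The answer depends only on the ratio of mass to exposure limit, so swapping in a different landmark volume just rescales the comparison.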

And realistically, AFAIK, no failed spacecraft has ever completely evaporated during reentry other than tiny satellites that are designed to do so, so that isn't a realistic concern anyway, IMO, unless you're planning to ship fissile material inside tiny satellites, and realistically, probably not even then, given the quantities involved.

Or to put this another way, if the entire 1 kg chunk of U-235 got somehow flattened out into a sheet (so that the uranium wouldn't shield you from most of its own radiation) and you were to lie down next to it, you'd still probably get less than the equivalent of one chest x-ray per hour of radiation. Mind you, I wouldn't want to leave a kilogram of uranium lying around on a children's playground, but realistically, the swingset is probably more likely to kill someone. The risk is nonzero, but not so nonzero that it's worth worrying about, IMO.

That doesn't mean it doesn't need to be done, but you acting like it's no risk, waving your hands to make the risk disappear, isn't helping a fucking thing.

From a safety point of view, the highest risk would be it landing on the ground somewhere, and some terrorist finding it and stealing the nuclear material before the government does. And given that there's only a 29% chance of it hitting land, and maybe a 0.1% chance of any given land region being within a short distance of a terrorist cell, I'm not sure that's worth thinking about too hard, either.
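The combined odds can be sketched directly. Note that both inputs are the comment's own estimates (29% land coverage is a real figure for Earth's surface; the 0.1% proximity guess is just a guess), so this only shows how small the product is, not a real risk assessment:

```python
# Multiply the two independent probability estimates from the paragraph above.
p_hits_land = 0.29     # fraction of Earth's surface that is land
p_near_cell = 0.001    # guessed chance the landing site is near a terrorist cell

p_recoverable = p_hits_land * p_near_cell
print(round(p_recoverable, 5))  # 0.00029, roughly 3 chances in 10,000
```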

The risk is nonzero, but it is so laughably small that I'd be more worried about the spacecraft physically hitting someone and killing them on impact than the tiny amount of U-235 killing someone.

I guess if the spacecraft missed you by a few meters and you somehow didn't die from the dust cloud, the criticality event from the impact might give you cancer someday, but...

Comment Re:Onsite generation (Score 1) 49

Grid conditions are highly variable, and if you're in the AI biz, you aren't gonna want to shut down your LLMs for a heat wave.

There's plenty of "AI"-related processing that could be delayed without anybody noticing. Training of new models, for example: you get [access to] a new model a couple of days later and you won't even notice, because models already arrive whenever they arrive. Google is also sufficiently distributed that it can simply move this processing to another location; since both the queries and the results are very small, there would be no appreciable delay from doing the processing far away.

Comment Re:Going for gold... (Score 2) 104

Focus group results are subject to two pretty obvious problems. One is that the kind of people who want to participate, and have time to, are not usually the people you actually want input from. The other is that the criteria for selecting focus group members can be chosen to produce a desired result: you read research saying that certain types of people want certain things, and then you recruit people like that to give positive feedback on your shitty ideas.

Comment That was not inadvertent (Score 1) 13

Inadvertent? I do not think you know what that word means.

A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI's chatbot for, and inadvertently exposing.

USERS SET THEM TO SHARE PUBLICLY
THAT IS NOT INADVERTENT
IT IS A CHOICE

TL;DR: GFY clickbait clowns

Comment wat (Score 1) 104

Microsoft has published a new video that appears to be the first in an upcoming series of videos dubbed "Windows 2030 Vision,"

Microsoft has consistently failed to implement any of their visions for Windows, ever, except for making it a privacy nightmare. Seriously, look at Windows history: every time they try to make substantive changes to Windows, they fail. They couldn't even bring us a more featureful filesystem. Now we're supposed to believe anything they say about the future of Windows? I refuse, unless they tell us it's going to kill babies; that I could believe.
