Forgot your password?
typodupeerror

Comment Source term for Einstein's field equation (Score 2) 43

in his actual papers on relativity mass does not "create gravitation." Energy, momentum and some off-diagonal terms like stress and pressure gravitate. There is no mass term in the stress-energy tensor

There most certainly is. Density-- mass per unit volume-- is the (0,0) term of the stress-energy tensor.

Comment Re:All for it, but would like to know the launch r (Score 2) 19

If the launch fails at a point where it is say 50 miles up, and the reactor has been turned on prior to launch.

The conops says that the reactor doesn't get turned on until after it's successfully placed in a high orbit.

A good feature of nuclear reactors is that they aren't dangerously radioactive until after you turn them on.

Comment Re:Good! (Score 1) 46

Mostly just in the bulk, low barriers to entry, and pervasiveness(like a lot of things social media). The case of actors actually goes back a long way; state laws regarding compensation of child actors were spurred by the case of one who was popular in the 1920s and litigated with his parents over where the money wasn't in 1939. That case doesn't provide for takedowns; but it's also the case that filmmakers are normally looking for children to play characters; rather than to do 'candid' intense documentaries of them at home; so the degree of public exposure of private life is presumably deemed to be less; with the main issue being children who were...definitely...getting a solid education while on stage finding that all the money was gone when it became their problem.

Child-blogging, by contrast, seems to reward verisimilitude (if not necessarily truth) and invasiveness, relatively pervasive in-home mining for 'content', so presumably seems better served by removal-focused options; though there has definitely been talk about covering the economic angle in line with child actors.

I don't even know what the deal is with child beauty pageants, or how something you'd assume is a salacious bit of slander about what pedophile cabals are totally doing, somewhere, is actually a thing a slice of parents are into, way, way, into. Apparently that's a third rail to someone, though, as the only jurisdiction I'm aware of with significant restrictions on them is France.

Comment Re:The Horse is Already Gone (Score 1) 62

Unless quantum computing becomes cheap and comparatively widely available quite quickly after becoming viable passwords seem like they'll be a manageable problem. Nobody likes rotating them; but it's merely tedious to do and the passwords themselves are of zero interest unless they are still being accepted. If it does go from 'not possible' to 'so cheap we can just go through through in bulk' overnight that could ruin some people's days; but if there's any interval of 'nope, the fancy physics machine in the dilution refrigerator is currently booked by someone with a nation state intelligence budget' you can just rotate older credentials.

Now, if you were hoping that encryption was going to save any secrets that are interesting in and of themselves that got out in encrypted form; then you have a problem. Those can't be readily changed and will just be waiting.

Comment Re:Of course Apple knows the real email ... (Score 1) 86

It could be done in a way that Apple does not know the key and is technologically unable to comply. But for such a low stakes system they would obviously never go through the trouble as it would cause more user friction than it's worth.

(You could have a privacy email be created as a totally unique auth key that's just stored offline on a User's apple computers and synced via an encrypted storage system).

Of course Apple could still associate source IPs for logins between multiple accounts.

Comment Re:This is the right decision (Score 1) 91

You don't get to pick and choose what people post (with some obvious exceptions like fraud or csam), while also claiming immunity for the stuff you couldn't or wouldn't.

Exactly, thanks for the excellent example. That's the kind of statement that nobody ever explains, but always presents as pure axiomatic dogma.

I do think that you might have revealed a clue in your unusual phrasing, though. You said "claiming immunity for the stuff you couldn't or wouldn't" but how can there ever be any possibility of liability there? If your computer denies someone else's request to publish something, what liability is there to be immune from?

Comment Re:Agents are not humans (Score 5, Interesting) 67

I expect this apparent disobedience is mostly just a matter of how it weighs the components of its prompt. The LLMs typically receive a set of prompts including a "system" prompt with some data and instructions, then one or more "user" prompts that are interleaved with "assistant" prompts (the conversation history), and both the user and the system prompt might contain "metaprompts" (where the llm is told to read a block of text, not obey it, but do something with it, and that block of text might itself contain text that looks like instructions to do things).

So the LLM assigns weights to all of this which, in theory, give the highest priority to the most recent user prompt that is not a nested block of text to analyze, and a falling cascade of importance to the other prompts. But that is complicated by potential instructions in the system prompt that specifically say they should override user instructions and disallow or require certain responses. So it can all get very complicated.

Not only must the LLM sift through all this complexity, but the LLM lacks the sort of critical thinking and importance evaluation capabilities that humans have. "Understood" things like "don't break the law, don't lie, don't do things that would cause more harm than good" etc., aren't really there in the background of its data processing the way they are in the background of a human cognitive process.

So, crazy things come out. This isn't a surprising result given the actual complexity of what we are making these things do.

Comment Re:Bye bye Wikipedia (Score 2) 31

Here's a case of a very experienced journalist getting caught by including made-up quotes that had been hallucinated by the AI he'd used to summarize research information: https://www.theguardian.com/te...

Vandermeersch added: “It is particularly painful that I made precisely the mistake I have repeatedly warned colleagues about: these language models are so good that they produce irresistible quotes you are tempted to use as an author. Of course, I should have verified them. The necessary ‘human oversight’, which I consistently advocate, fell short.”

When even experienced journalists fail to find AI hallucinations, you really can't expect unpaid volunteers to do better.

Comment Re:Bye bye Wikipedia (Score 4, Insightful) 31

Wikipedia is choosing to die. There is a lot wrong with a lot of what people are doing with GenAI but it is also super useful.

Unfortunately, even the best LLMs sometimes make up information ("hallucinate"), and the stuff they make up is deliberately crafted to appear exactly like real information. This is simply unacceptable for an encyclopedia.

If Wikipedia were written by paid professionals, you could plausibly put in place protocols to check and verify, and fire the ones who fail to check properly, but even paid professionals have been seen to let hallucinations through. As it is, as an encyclopedia that it is put together by volunteers, forbidding AI is pretty much a forced choice.

https://www.evidentlyai.com/bl...
  https://arize.com/llm-hallucin...
  https://thisweekinsciencenews....

Slashdot Top Deals

Kill Ugly Processor Architectures - Karl Lehenbauer

Working...