Catch up on stories from the past week (and beyond) at the Slashdot story archive

 




Comment Re:Oh, Such Greatness (Score 2) 131

Lincoln was a Free Soiler. He may have had a moral aversion to slavery, but it was secondary to his economic concerns. He believed that slavery could continue in the South but should not be extended into the western territories, primarily because it limited economic opportunities for white laborers, who would otherwise have to compete with enslaved workers.

From an economic perspective, he was right. The Southern slave system enriched a small aristocratic elite—roughly 5% of whites—while offering poor whites very limited upward mobility.

The politics of the era were far more complicated than the simplified narrative of a uniformly radical abolitionist North confronting a uniformly pro-secession South. This oversimplification is largely an artifact of neo-Confederate historical revisionism. In reality, the North was deeply racist by modern standards, support for Southern secession was far from universal, and many secession conventions were marked by severe democratic irregularities, including voter intimidation.

The current coalescence of anti-science attitudes and neo-Confederate interpretations of the Civil War is not accidental. Both reflect a willingness to supplant scholarship with narratives that are more “correct” ideologically. This tendency is universal—everyone does it to some degree—but in these cases, it is profoundly anti-intellectual: inconvenient evidence is simply ignored or dismissed. As in the antebellum South, this lack of critical thought is being exploited to entrench an economic elite. It keeps people focused on fears over vaccinations or immigrant labor while policies serving elite interests are quietly enacted.

Comment Re:Cryo-embalming (Score 1) 74

I suspect that a more fundamental problem is what you would need to preserve.

Embryos are clearly the easier case, being small and impressively good at using contextual cues to elaborate an entire body plan from a little cell blob (including more or less graceful handling of cases like identical twins, where physical separation of the blob changes the requirements dramatically and abruptly); but they are also the case that faces looser constraints. If an embryo manages to grow a brain that falls within expectations for humans, it's mission accomplished. People may have preferences, but a fairly wide range of outcomes counts as normal. Discard or damage too much and the embryo simply won't work anymore, or you'll get ghastly malformations; but there are uncounted billions of hypothetical babies that would count as 'correct' results if you perturb the embryo just slightly.

If you are freezing an adult, you presumably want more: you want the rebuilt result to fall within the realm of being them. That appears not to require an exact copy (people have at least a limited ability to handle cell death and replacement, or to have a few synapses knocked around, without radical personality change most of the time; and a certain amount of forgetting is considered normal); but it is going to require some amount of fidelity that quite possibly won't be available (depending on what killed them and how, and how quickly and successfully you froze them), and which cannot, in principle, be reconstructed if lost.

Essentially it's the (much harder, because it's all fiddly biotech) difference between getting someone to go out and paint a landscape for you versus getting someone to repaint the picture that was damaged when your house burned down. The first task isn't trivial, but it poses no theoretical issues, and getting someone who can do it to do it is easy enough. The second isn't possible, full stop, in principle: even if you are building the thing atom by atom, the information specifying what you want has been partially lost. It is, potentially, something you could more or less convincingly and inoffensively fake, the way people do Photoshop 'restoration' of damaged photos, where the result is a lie, but a plausible one that looks better than the damage does.

The fraught ethics of neurally engineering someone until your client says that their personality, memories, and behavior 'seem right' are, of course, left as an exercise for the reader, along with the requisite neuropsychology.

Comment Re:Computers don't "feel" anything (Score 1) 52

It's different from humans in that human opinions, expertise, and intelligence are rooted in experience. Good or bad, and inconsistent as it is, that foundation is far, far more stable than AI's. If you've ever tried to work on a long-running task with generative AI, the crash in performance as the context rots is very, very noticeable, and it's intrinsic to the technology. Work with a human long enough and you will see the faults in their reasoning, sure, but it's just as good or bad as it was at the beginning.

Comment Re:You're fired! (Score 2) 65

Much as I agree with you from a moral standpoint, from a legal standpoint it is not as cut and dried as you make it out to be.

If you want to make the argument that "data about you" is "your data" that's fine, but the presumption here is that it's the airline's data, and it is offering it freely (as in speech, not as in beer) to the government. Where is the fourth amendment implication? It is not your "house, person, papers, or effects," it is the airline's and they're happy to let the government sort through it.

Comment Re:Icky, but (Score 1) 65

While I agree that this is not something I want the government to be doing, what part of a database maintained by the airlines constitutes your person, house, papers, or effects? If the government demands access that would be one thing, but if the airlines say "hey, wanna buy our data?" and the government says "hell yeah" that is something else.

Comment Re:Computers don't "feel" anything (Score 3, Informative) 52

Correct. This is why I don't like the term "hallucinate". AIs don't experience hallucinations, because they don't experience anything. The problem they have would more correctly be called, in psychological terms, "confabulation": they patch up holes in their knowledge by making up plausible-sounding facts.

I have experimented with AI assistance for certain tasks, and find that generative AI absolutely passes the Turing test for short sessions; if anything it's too good, too fast, too well-informed. But the longer the session goes, the more the illusion of intelligence evaporates.

This is because under the hood, what AI is doing is a bunch of linear algebra. The "model" is a set of matrices, and the "context" is a set of vectors representing your session up to the current point, augmented during each prompt response by results from Internet searches. The problem is that the context takes up a lot of expensive high-performance video RAM, and every user only gets so much of it. When you run out of space, the oldest material drops out of the context, which is why credibility degrades the longer a session runs. You start with a nice empty context, bring in some internet search results, run them through the model, and it all makes sense. Once you start throwing out parts of the context, what remains turns into inconsistent mush.
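That eviction dynamic is easy to sketch. The following is a deliberately crude toy model (the class name, the word-count "tokenizer," and the tiny budget are all illustrative, not any vendor's real implementation): once the token budget is exceeded, the oldest entries silently fall out, so early parts of a session stop informing later responses.

```python
from collections import deque


class ContextWindow:
    """Toy model of a fixed-size context: exceed the token budget
    and the oldest entries are silently evicted, oldest first."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.entries = deque()  # (text, token_count), oldest at the left
        self.used = 0

    def add(self, text: str) -> None:
        tokens = len(text.split())  # crude stand-in for a real tokenizer
        self.entries.append((text, tokens))
        self.used += tokens
        # Evict from the oldest end until we fit the budget again.
        while self.used > self.max_tokens and self.entries:
            _, dropped = self.entries.popleft()
            self.used -= dropped

    def visible(self) -> list[str]:
        return [text for text, _ in self.entries]


ctx = ContextWindow(max_tokens=6)
ctx.add("the user prefers tabs")    # 4 tokens, fits
ctx.add("the project uses python")  # 4 more tokens: oldest entry evicted
print(ctx.visible())                # only the later entry survives
```

The model never "decides" what to forget; whatever is oldest goes, which is exactly why long sessions lose early instructions and drift.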

Comment Re:We've seen this pattern before. (Score 5, Interesting) 94

That's only very partially true. The uptick in unpaid mortgages gave the house of cards a little tap; but it was the giant pile of increasingly exotic leverage constructed on top of the relatively boring retail debt that gave the situation enough punch to be systemically dangerous. The elaborate securitizing, slicing, and trading also made it comparatively cumbersome to renegotiate a mortgage headed toward delinquency and take a relatively controlled writedown, rather than triggering a repossession that left lenders holding a bunch of real estate they weren't well equipped to sell.

Comment Re:His Whole Pitch is Safety (Score 1) 73

Apparently, "safeguards" means "don't let the AI say something that hurts feels" rather than "don't let the AI act in a manner that is dangerous and unlawful." I say this because, apparently, Anthropic's systems have been leveraged by nation-state actors for hacking campaigns (though the details are minimal and read like marketing spiel about how awesome their tools are, rather than information on what actually happened).

Comment Re:AI code = Public Domain (Score 1) 45

That is how it's been. Those AI tools were trained on open source/public domain content, so any contribution by AI tools must be considered released into the public domain. It does not get simpler than that, and current US copyright law has already indicated that AI-created works are not eligible for copyright.

That's not the question.

The question is whether the AI-produced code is a derivative of existing code, and the answer is still not resolved.

In some cases, the answer is a clear YES, because the code is a direct copy of something written by someone else. If something like that ends up in the kernel, it will have to be removed when someone notices.

Comment Re: Cost per KG compared to Falcon 9 / Heavy? (Score 1) 68

Agreed, he's truly despicable. I'll also agree with "dangerous," as anyone who has that much money is dangerous by definition. There is nothing wrong with my understanding of ethics or principle. I also think SpaceX succeeds in spite of Musk, not because of him.

With all of that said, I fail to see how anyone's proclivities or politics play into whether a company they own will succeed at any given objective. I'd further argue that if you believe someone is dangerous, you're fucking stupid to pretend they cannot achieve things that are clearly within their (demonstrated) capability, and the only thing you accomplish is convincing people they're less dangerous than they are.
