Comment How cute. (Score 1) 11

It's adorable how they pretend that the 'well-being' gap between the people who matter and the ones who don't is some sort of surprise that calls for urgent action, rather than a deliberate outcome carefully achieved.

It's the pandemic-period numbers that are the anomaly: they come from a period when at times downright existential issues forced people's hands (at least for white-collar workers; if you were 'essential', good luck, and back to dealing with the public in person), and a lot of work has since been put into rectifying that period.

What's next? A comparative analysis of the labor markets of the 1950s and the 1980s that studiously pretends it's not exactly what Milton Friedman and Neutron Jack intended?

Comment Perspective probably dooms him. (Score 3, Insightful) 184

In a sense his puzzlement is justified: when the tech demo works, an LLM is probably the most obvious candidate for 'just this side of sci-fi'; and, while many of the capabilities offered are actually somewhat hollow (realistically, most of the 'take these 3 bullet points and create a document that looks like I cared' / 'take that document that looks like my colleague cared and give me 3 bullet points' features are really just enticements to even more dysfunctional communication), some of them are fairly hard to see being duplicated by conventional means.

However, I suspect that his perspective is fundamentally unhelpful in understanding the skepticism: when you are building stuff it's easy to get caught up in the cool novelty and lose sight of the pain points (especially when you are deep C-level, rather than the actual engineer fighting ChatGPT's tendency to em-dash despite all attempts to control it), and to overestimate both how well your new hotness stacks up against existing alternatives and how forgiving people will or won't be about its deficiencies.

Something like Windows trying 'conversational'/'agentic' OS settings, for instance, probably looks pretty cool if you are an optimism-focused ML dude ("hey, it's not perfect, but it's a natural-language interface to adjusting settings that confuse users!"); but it looks like absolute garbage from an outside perspective, both because it's badly unreliable, and humans tend not to respond well to clearly unreliable 'people' (if it can't even find dark mode, why waste my time with it?), and because it looks a lot like abdication of a technically simpler, less exciting job in favor of chasing the new hotness.

"Settings are impenetrable to a nontechnical user" is a UI/UX problem (along with a certain amount of lower-level 'maybe if Bluetooth were less fucked, people wouldn't care where the settings were, because it would just work'); so throwing an LLM at the problem is basically throwing up your hands and declaring it unsolvable by your UI/UX people, which is an abject concession of failure, not a mark of progress.

I think that may be what he really isn't understanding: MS has spent years squandering the perception that they would at least try to provide an OS that let you do your stuff, in favor of faffing with various attempts to be your cool app buddy and relentless upsell pal; so every further move in that direction is basically confirmation that no fucks are given about keeping the fundamentals in good order rather than getting distracted by shiny things.

Comment Re:Oh, Such Greatness (Score 1, Interesting) 241

Lincoln was a Free Soiler. He may have had a moral aversion to slavery, but it was secondary to his economic concerns. He believed that slavery could continue in the South but should not be extended into the western territories, primarily because it limited economic opportunities for white laborers, who would otherwise have to compete with enslaved workers.

From an economic perspective, he was right. The Southern slave system enriched a small aristocratic elite—roughly 5% of whites—while offering poor whites very limited upward mobility.

The politics of the era were far more complicated than the simplified narrative of a uniformly radical abolitionist North confronting a uniformly pro-secession South. This oversimplification is largely an artifact of neo-Confederate historical revisionism. In reality, the North was deeply racist by modern standards, support for Southern secession was far from universal, and many secession conventions were marked by severe democratic irregularities, including voter intimidation.

The current coalescence of anti-science attitudes and neo-Confederate interpretations of the Civil War is not accidental. Both reflect a willingness to supplant scholarship with narratives that are more “correct” ideologically. This tendency is universal—everyone does it to some degree—but in these cases, it is profoundly anti-intellectual: inconvenient evidence is simply ignored or dismissed. As in the antebellum South, this lack of critical thought is being exploited to entrench an economic elite. It keeps people focused on fears over vaccinations or immigrant labor while policies serving elite interests are quietly enacted.

Comment Re:Cryo-embalming (Score 1) 81

I suspect that a more fundamental problem is what you would need to preserve.

Embryos are clearly the easier case, being small and impressively good at using some sort of contextual-cue system to elaborate an entire body plan from a little cell glob (including more or less graceful handling of cases like identical twins, where physical separation of the cell glob changes requirements dramatically and abruptly); but they are also the case that faces looser constraints. If an embryo manages to grow a brain that falls within expectations for humans, mission accomplished. People may have preferences, but a fairly wide range of outcomes counts as normal. If you discard or damage too much, the embryo simply won't work anymore, or you'll get ghastly malformations; but there are uncounted billions of hypothetical babies that would count as 'correct' results if you perturb the embryo just slightly.

If you are freezing an adult, you presumably want more: you want the rebuilt result to fall within the realm of being them. That appears not to require an exact copy (people have at least a limited ability to handle cell death and replacement, or to knock a few synapses around without radical personality change, most of the time; and a certain amount of forgetting is considered normal); but it is going to require some amount of fidelity that quite possibly won't be available (depending on what killed them and how, and how quickly and successfully you froze them), and which cannot, in principle, be reconstructed if lost.

Essentially it's the (much harder, because it's all fiddly biotech) equivalent of getting someone to go out and paint a landscape for you vs. getting someone to repaint the picture that was damaged when your house burned down. The first task isn't trivial, but it poses no theoretical issues, and getting someone who can do it to do it is easy enough. The second isn't possible, full stop, in principle: even if you are building the thing atom by atom, the information regarding what you want has been partially lost. It is, potentially, something you could more or less convincingly/inoffensively fake, the way people do Photoshop 'restoration' of damaged photos, where the result is a lie, but a plausible one that looks better than the damage does.

The fraught ethics of neurally engineering someone until your client says that their personality, memories, and behavior 'seem right' is, of course, left as an exercise to the reader; along with the requisite neuropsychology.

Comment Re:No need for security (Score 1) 97

1. I got asked once if I played World of Warcraft, since they saw a guy with the name "thegarbz" playing. I said no. As it happens, I know exactly who that person is, because he impersonated me as a joke. I found it flattering and funny, but it has no impact on my life beyond that.

Reminds me of my first email account ;) One of my professors said we all had to register for an email account (this was in the mid-90s) so we could submit our homework to him, so I registered his name at hotmail.com to mess with him ;)

Comment Re:Don't use your REAL phone-number, too risky (Score 1) 34

This will only make everyone who knows you unable to contact you (well, you might consider that a feature, but let's say that isn't what you're going for). First, you'll have to contact each of them and go through the whole "who are you?" dance; that is, if you don't fall into one of the many options that make them ignore unknown numbers in the first place, and if, when they do see your chat or call, they don't take one of the deny/ignore/report options, especially after the scary "be careful with unknown numbers like this" message. And then, after you iron out who you are with each and every person you might want to chat with in the future, 90% won't even save your "alternate" number to their address book; and of the remaining 10%, if they don't contact you often enough for you to be at the top, it'll be a 50/50 chance that they pick the right number next time they try to reach you (since it's not a separate address book, but the one shared for everything, including regular calls and SMS).

Wouldn't it be easier in the first place not to post a status like "I'm off to the Maldives with my secretary, losers" and a matching profile picture (switched to "visible to everyone") if you don't want that info to be public?

Also, what does any of this have to do with tablets? You can have a second (and a third, and a fourth) "linked" device besides the main one. These can be other phones, tablets, desktop apps, or logged-in browsers. That changes nothing: it's the same account, with the same things visible (or not), etc. If you meant taking yet ONE MORE number for the tablet, that's bad, for the reasons above.

Comment Re:Computers don't "feel" anything (Score 1) 55

It's different from humans in that human opinions, expertise, and intelligence are rooted in experience. Good or bad, and inconsistent as it is, human judgment is far, far more stable than AI. If you've ever tried to work on a long-running task with generative AI, the crash in performance as the context rots is very, very noticeable, and it's intrinsic to the technology. Work with a human long enough and you will see the faults in his reasoning, sure, but it's just as good or bad as it was at the beginning.

Comment Re:Computers don't "feel" anything (Score 3, Informative) 55

Correct. This is why I don't like the term "hallucinate". AIs don't experience hallucinations, because they don't experience anything. The problem they have would more correctly be called, in psychological terms, "confabulation": they patch up holes in their knowledge by making up plausible-sounding facts.

I have experimented with AI assistance for certain tasks, and find that generative AI absolutely passes the Turing test for short sessions; if anything, it's too good, too fast, too well-informed. But the longer the session goes, the more the illusion of intelligence evaporates.

This is because, under the hood, what the AI is doing is a bunch of linear algebra. The "model" is a set of matrices, and the "context" is a set of vectors representing your session up to the current point, augmented during each prompt response by results from Internet searches. The problem is that the "context" takes up a lot of expensive, high-performance video RAM, and every user only gets so much of it. When you run out of space, the oldest material drops out of the context. This is why credibility drops the longer a session runs: you start with a nice empty context, bring in some Internet search results, run them through the model, and it all makes sense. Once you start throwing out parts of the context, the context turns into inconsistent mush.
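A toy sketch of that eviction behavior (purely illustrative; the class name and the fixed-size window are my own simplification, not how any real inference stack manages its context):

```python
# Toy sketch: a fixed-capacity context window that silently drops the
# oldest entries once it is full -- the mechanism described above for
# why long sessions lose the plot.

from collections import deque


class ContextWindow:
    def __init__(self, capacity):
        # A deque with maxlen evicts from the left (oldest) automatically
        # whenever a new item is appended past capacity.
        self.buffer = deque(maxlen=capacity)

    def add(self, entry):
        self.buffer.append(entry)

    def contents(self):
        return list(self.buffer)


window = ContextWindow(capacity=4)
for turn in ["system prompt", "fact A", "fact B", "fact C", "fact D"]:
    window.add(turn)

# The system prompt has already been evicted -- later responses can no
# longer "see" it, which is one way long sessions drift into mush.
print(window.contents())  # ['fact A', 'fact B', 'fact C', 'fact D']
```

Real systems evict and compress context in more elaborate ways, but the basic failure mode is the same: whatever fell out of the window simply no longer exists as far as the model is concerned.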
