
Comment Re:Almost as if... (Score 2) 26

unable to consume material as rapidly as they did in the distant past

It's almost as if time slowed down around them the more they eat...

That's not the reason. Time slows down (from the perspective of a far-away observer) as objects approach the event horizon. It doesn't matter whether the black hole is small or big: it slows down by the same amount, and the only question is where. The event horizon has a larger radius when the black hole is big and a smaller radius when it's small.

In both cases, from the perspective of a far away observer NOTHING ever crosses the event horizon, whether the black hole is small or big. It slows down as it approaches that point, and at the event horizon itself, time stops completely, so it will freeze there for eternity. You won't be able to see that, instead you see the light that it emits being redshifted as it has to climb the black hole's gravity well, eventually becoming too red-shifted to be detected, and it's effectively black.
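To sketch that claim concretely (these are the standard Schwarzschild time-dilation and redshift formulas, added here for illustration; they aren't in the original comment):

```latex
% Proper time d\tau for a static observer at radius r, compared with
% coordinate time dt for a far-away observer:
\[
  \frac{d\tau}{dt} = \sqrt{1 - \frac{r_s}{r}},
  \qquad r_s = \frac{2GM}{c^2}
\]
% Gravitational redshift of light emitted from radius r:
\[
  1 + z = \left(1 - \frac{r_s}{r}\right)^{-1/2}
\]
```

As r approaches r_s, dτ/dt goes to zero and the redshift z diverges, regardless of the mass M. The mass only sets where r_s sits, which is exactly the point being made above.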

In both cases, from the perspective of the object falling in, time passes normally, and it crosses the event horizon without even knowing it's there. Well, for a very large black hole it doesn't notice anything; for a very small black hole, tidal effects cause spaghettification before it crosses the event horizon, so it's going to notice something and have a bad time. But that isn't the event horizon itself, it's just the difference in the force of gravity across the length of the object.
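A quick back-of-envelope check of the tidal claim (my own illustration, using the Newtonian tidal approximation a ≈ 2GMl/r³ evaluated at the horizon; the specific masses are just examples):

```python
# Tidal (differential) acceleration across an object of length l at
# radius r is roughly a_tidal ~ 2*G*M*l / r^3.  Evaluated at the
# horizon r = r_s = 2GM/c^2, this scales as 1/M^2: small black holes
# spaghettify you well outside the horizon, supermassive ones don't.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Horizon radius r_s = 2GM/c^2 in metres."""
    return 2 * G * mass_kg / c**2

def tidal_accel_at_horizon(mass_kg, length_m=2.0):
    """Differential acceleration across a ~2 m object at r = r_s."""
    r_s = schwarzschild_radius(mass_kg)
    return 2 * G * mass_kg * length_m / r_s**3

stellar = tidal_accel_at_horizon(10 * M_SUN)        # 10-solar-mass BH
supermassive = tidal_accel_at_horizon(4e6 * M_SUN)  # Sgr A*-scale BH

print(f"10 M_sun horizon tide:  {stellar:.3e} m/s^2")
print(f"4e6 M_sun horizon tide: {supermassive:.3e} m/s^2")
```

For the stellar-mass hole the tide at the horizon is on the order of 10^8 m/s², utterly lethal, while for the supermassive one it is a tiny fraction of Earth gravity, so you'd cross without feeling it.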

So, the reason it slows down consumption is not related to time dilation. To use your terms: it's "almost as if" physicists spend their lives studying these things, and therefore if something seems obvious to a layman reading a Slashdot article, they've already considered it and either accepted it, dismissed it, or tested it.

Comment Re:Working with other people's code (Score 0) 150

Yes. So far, the LLM tools seem to be much more useful for general research purposes, analysing existing code, or producing example/prototype code to illustrate a specific point. I haven't found them very useful for much of my serious work writing production code yet. At best, they are hit and miss with the easy stuff, and by the time you've reviewed everything with sufficient care to have confidence in it, the potential productivity benefits have been reduced considerably. Meanwhile, even the current state-of-the-art models are worse than useless for the more research-level stuff we do. We try them out fairly regularly, but they make many bad assumptions and then completely fail to generate acceptable-quality code when told no, those assumptions are not acceptable, and they really do need to produce a complete, robust solution to the original problem that is suitable for professional use.

Comment Re: sure (Score 2) 150

But one of the common distinctions between senior and junior developers -- almost a litmus test by now -- is their attitude to new, shiny tools. The juniors are all over them. The seniors value demonstrable results, and as such tend to prefer tried and tested workhorses to shiny new things with unproven potential.

That means if and when the AI code generators actually start producing professional-standard code reliably, I expect most senior developers will be on board. But except for relatively simple and common scenarios ("Build the scaffolding for a user interface and database for this trivial CRUD application that's been done 74,000 times before!") we don't seem to be anywhere near that level of competence yet. It's not irrational for seniors to be risk-averse when someone claims to have a silver bullet, and both the seniors' own experience and a growing body of more formal study suggest that Brooks remains undefeated.

Comment Re:Please don't use Paramount+ Platform (Score 3, Interesting) 55

(+1, Truth)

Of all the major streaming platforms, Paramount+ stands alone in how often it just doesn't work. It doesn't work reliably on state-of-the-art streaming boxes. It doesn't work reliably on desktop PCs. In fact, of all the devices we have in our household, it works reliably on a total of zero of them.

We have several of the other commercial streaming platforms plus the apps or online services for several of our main national TV channels as well and almost all of them work almost all of the time. It's bizarre how bad Paramount+ manages to be compared to literally everyone else. It must be hurting their bottom line to some degree or surely will do soon if they don't get a handle on it, because why pay for something you literally can't watch?

Comment Re: Wow, scary (Score 1) 84

It isn't like this is an accidental attitude, that very company has been spamming us with advertising telling us pretty much how infallible they are for some months now.

That is an accidental attitude. I don't even understand what you're trying to imply here.

The default assumption, by literally everyone, is that if it's in an ad, it's not a statement to be trusted. Ads are *by nature* untrustworthy: they are a biased view meant to get you interested in the product. It's up to the person with the wallet to then do actual research, and they are literally the only person to blame if they trust the ad. If the ads were telling you the limitations of the product, then the person to blame would be the marketing team that created the ad; they should be fired for incompetence.

If the government is depending on ads to evaluate the capabilities of the AI, that's where you should focus the outrage. If the ads were in any way claiming that Claude is capable of doing anything, up to and including making you breakfast and turning you into a stud that all women want, then your outrage should be with the terrible marketing team that apparently decided their competition deserves the market share.

Comment Re:Fuck this administration (Score 4, Informative) 393

We don't have a king, except in the minds of the TDS afflicted.

Ok. The founding fathers didn't want the President of the United States to have ANY POWERS to make decisions inside the country. The goal was for the President to be merely the administrative head who enforces the laws Congress passes, with the veto as his only check on Congress. The President also served as Commander in Chief and had the power to sign treaties with foreign governments, but those powers were meant to be EXTREMELY limited: only Congress could declare war, and the Senate was required to ratify any treaty with a foreign government.

If the President has the power to make ANY DECISIONS WHATSOEVER, instead of enforcing the decisions Congress has made, then it's not the role the founding fathers wanted.

They also wanted the executive to be very neutral. Many of them were against the concept of political parties, but parties turned out to be inevitable. However, until the 12th Amendment, the vice-president was the runner-up: whoever got the second-most votes in the electoral college. Under that system, Hillary would have been Trump's VP in his first term, and Harris would have been his VP in his second. They set it up that way because they wanted a check even within the executive, with someone holding different views being the one to break ties in the Senate.

Comment Re: Interesting Summary (Score 1) 58

There's a difference between not using AI tools at all and not using code generated by AIs.

The latter involves a lot of risks that aren't well understood yet -- some technical, some legal, some ethical -- and it's entirely possible that some of those risks are going to blow up in the face of the gung-ho adopters with existential consequences for their businesses.

I mostly work with clients in industries where quality matters. Think engineering applications where equipment going wrong destroys things or kills people and where security vulnerabilities are a proxy for equipment going wrong.

I know plenty of smart, capable people working in this part of the industry who are totally fine with blanket-banning the use of AI-generated code on these jobs. A lot of that code simply isn't up to the required standards anyway, but even when the AI does produce something you could actually use, there are still all the same costs for review and certification that any other code incurs. That includes the need for at least one human reviewer to work out why the AI wrote what it did, which may or may not have any better answer than "statistically, it seemed like a good idea at the time".

Comment Re:Interesting Summary (Score 2) 58

The claims also seem a bit sus. "Eighty percent of new developers on GitHub use Copilot within their first week." Is this the same statistic someone was debunking recently where anyone who had done something really basic (it might have been using the search facility?) was counted as "using Copilot"? A lot of organisations seem to be cautious about using code generated by AIs, or even imposing a blanket ban, so things must be very different in other parts of the industry if that 80% is also representative of professional developers using Copilot significantly for real work.

Comment Re:Paywall free link (Score 1) 151

The military is right.

The military is right. As in, the military is saying Anthropic's tools are the best there are, and they don't want to change. Pete Hegseth is wrong, and he's throwing a hissy fit that, as usual, goes against what the people who now have to follow his orders, and who are far more qualified than he is, actually want to do.

The entire value of AI for them is decision speed.

Incorrect. It's important that the decision be the *best decision*. Speed is a factor, but it's not the most important one. I can give you a system that gives you decisions faster than any AI, just have it choose randomly instead of actually analyzing any data, and it will be very fast!

What Anthropic is concerned about is that they are not confident their AI system can make decisions, like what to shoot at, with a low enough error rate to justify doing so. Anthropic is understandably worried about the blowback to *them* when they become the scapegoat for all our drones engaging in friendly fire and killing a bunch of Americans, because Hegseth decided to trust a system that, if you ask it "the carwash is only 100m from my house, should I drive or walk there to wash my car?", will tell you to walk, because it's so close. You really want *that* system making the decision on who to kill?

I'm a pragmatist. I *know* eventually humans will be out of the loop in such decisions. We're very, very, VERY far from that. We know it, the military knows it, ALL the AI vendors, out of which Anthropic currently has the best product, know this. Pete Hegseth is apparently too incompetent to know this.

The second part of the equation, AI is actually pretty good at. It's a great tool for sifting through massive amounts of data, so it's great for helping to spy on Americans. No patriot should want that, however. Anthropic is OK with it being used to spy on other countries, but understandably does not want it used to spy on our own citizens. If you're against that stance, fuck you, you have no right to call yourself an American; you don't have the very basic values this country stands for.
