
Comment Re:Java hasn't been in the browser for 10+ years (Score 1) 33

Loading a webpage shouldn't bog down a $4000 MacBook Pro...but the shitty front-end dev community said "M4 should easily be able to load my stupid and simple website?"...."Challenge accepted!"

Does it actually bog down a reasonably-specced computer? I don't think it does. I think the sluggishness is just from the sheer volume of stuff that has to be downloaded, and the inefficient way it's downloaded. And the reason the web devs don't notice the awfulness is (a) their browsers have 98% of it cached and (b) they have a GigE (or 10 GigE) connection to the server. They certainly don't have computers faster than your M4.

Comment Re:Needs to be optional (Score 1) 33

As long as I can turn it off, I don't give a rat's ass what stupid, annoying, and bandwidth-eating "features" they put into Chrome.

I think you didn't understand what this feature is. It's pretty much the opposite of annoying, and it has no effect at all on bandwidth consumption. Though I suppose when devs get used to their sites seeming to load faster they'll bloat them up even more...

Comment Re:Bad for us, but not "our fault" (Score 1) 107

The real reason we will never be able to "fix" the drought is because the American West is not in a drought right now.

Basically everyone who lives in the area or studies the climate or hydrology would tell you that you're insane.

The West's rapid aridification isn't being caused by a "once-in-a-century" weather event

More like a once-in-a-millennium event. Though I suspect it's going to be considerably more common going forward.

What we're dealing with in the West is not a drought because the current lack of rainfall isn't "abnormal" for a desert. Dry is the default setting. And you can't call it a "drought" because you wish deserts were wetter.

Deserts have some amount of normal precipitation, too. And when you get a lot less than normal, that's called a drought. Yes, even in a desert.

Comment Re:Watch, Nerds! (Score 2) 101

Each time some nerd says "Let them censor, I have a VPN" he forgets that the next step is to crack down on VPNs. Technical defenses against political problems only buy you a bit of time, but will eventually fail.

Even worse is when they compromise the VPN operators and then monitor your usage until you do something that makes them decide to crack down on you.

People erroneously think of VPNs as privacy protectors. They aren't, not unless you have very good reason to trust whoever is running the server. If you don't, then they're concentrators for likely subversive traffic and its origins.

Comment Re:The God-fearing and the Accountants (Score 1) 162

This is one case where the sky daddy freaks could be useful to stop an extremely dangerously stupid move "forward." Because we live in this world, in this time, if this goes forward, it will 100% be used to extend the lives of the ultra-rich, while the rest of us remain fodder for their machinations.

Meh.

It would undoubtedly be very expensive at first, and therefore only available to the very wealthy (though probably not just the ultra-wealthy: even without automation, caring for such a clone wouldn't be a full-time job, so call it maybe $30k/year, within reach of the upper middle class). But competition would drive automation, and we already have most of the techniques required, having developed them to deal with coma patients and the like — and at lower cost here, because this case would be dealing with a fundamentally healthy body. My guess, based on some napkin math, is that the cost could be driven down to as low as $10k per year. Maybe lower.

$10k per year is expensive, sure, but having an immunologically-perfect organ donor could absolutely be worth it for someone making as little as $200k per year.

If the cost could be driven down to $5k per year... then it's in the range where most middle-class Americans could afford it, even if it meant that they'd have to cut back a little somewhere else; maybe drive an older car rather than leasing a new one, or similar.

Comment Re:Here it comes (Score 1) 70

You're confusing the importance of avoiding Kessler syndrome in LEO with the difficulty of causing Kessler syndrome. GEO debris can potentially remain there for millions of years before interactions between the gravitational pull of the Sun, Earth, and Moon sufficiently perturb it. LEO debris remains for weeks to months. You have to have many orders of magnitude more debris in LEO to trigger Kessler Syndrome, where the rate of collisions exceeds the rate of debris loss.
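The collision-vs-decay comparison can be sketched with a toy runaway model. This is not real orbital mechanics — the collision coefficient `k` and the decay timescales below are made-up illustrative numbers — but it shows why a short debris lifetime pushes the runaway threshold up by orders of magnitude:

```rust
// Toy Kessler model: debris population N obeys dN/dt = k*N^2 - N/tau,
// where k*N^2 is collision-generated debris and N/tau is decay loss.
// Runaway (collisions outpacing decay) starts above N* = 1/(k*tau).

fn kessler_threshold(k: f64, tau_years: f64) -> f64 {
    1.0 / (k * tau_years)
}

fn main() {
    let k = 1e-9; // made-up collision coefficient, per object per year
    let tau_leo = 0.25; // ~months for low LEO
    let tau_geo = 1e6;  // effectively "forever" for GEO

    let n_leo = kessler_threshold(k, tau_leo);
    let n_geo = kessler_threshold(k, tau_geo);

    println!("toy LEO threshold: ~{:e} objects", n_leo);
    println!("toy GEO threshold: ~{:e} objects", n_geo);
    // The ratio is just tau_geo / tau_leo: millions of times more
    // debris needed in LEO before the cascade self-sustains.
    println!("ratio: {:e}", n_leo / n_geo);
}
```

The exact numbers are fiction; the structural point — the threshold scales as 1/tau, so fast decay means you need vastly more junk to trigger a cascade — is the argument above.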

The fact that a LEO Kessler Syndrome would also be short-lived comes on top of that.

It's also worth noting that modern satellites are not only vastly better at properly disposing of themselves than they were in the 1970s, when Kessler Syndrome was proposed, but also vastly better at avoiding debris strikes. All of these factors multiply together.

Comment Re:Here it comes (Score 3, Insightful) 70

People forget that the primary concerns about Kessler Syndrome were about geosynchronous orbit, which used to be where all the most important satellites went (many of course still go there, but not the megaconstellations). It takes a long, long time for debris to leave GEO. But LEO is a very different beast.

Comment Re:Here it comes (Score 4, Informative) 70

Yeah. In particular:

with fragments likely to fall to Earth over the next few weeks

LEO FTW. Kessler Syndrome is primarily a risk if you put too much stuff with too poor an end-of-life disposal rate in GEO. Rates of end-of-life without proper disposal have declined exponentially since Kessler Syndrome was first proposed (manufacturers both understand the importance more and do a better job of decreasing the rate of failures before deorbit — in the past, there sometimes wasn't even an attempt to dispose of a craft at end-of-life). And now we're increasingly putting stuff in LEO, where debris falls out of orbit relatively quickly. It's not impossible in LEO, especially in higher LEO orbits, but it's much more difficult.

Or to put it another way: fragments can't build up to hit other things if they're gone after just a couple weeks.

And this trend is likely to continue: a lower percentage of premature failures, and decreasing altitudes / reentry times. Concerning ever-decreasing altitudes, we've already been doing this via ion engines that provide more reboost (with mission lifespans designed for only several years before running out of propellant, instead of decades like the giant GEO birds). But there's increasing interest in "sky-skimming" satellites that function somewhat like a ramjet: instead of krypton or xenon, the sparse atmospheric air itself is the propellant for the ion engine, so the craft can in effect fly indefinitely until it fails, whereupon it quite rapidly enters the denser atmosphere and burns up.
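The altitude effect can be sketched with a rough scaling argument. Assuming an exponential atmosphere (density falls as exp(-h/H) with scale height H, here a made-up ~60 km for illustration), drag — and hence decay rate — scales with density, so orbital lifetime grows exponentially with altitude:

```rust
// Rough scaling only, assuming rho(h) = rho0 * exp(-h / H):
// lifetime is ~proportional to 1/rho, so the lifetime at altitude h
// relative to a reference altitude h_ref is exp((h - h_ref) / H).
fn relative_lifetime(h_km: f64, h_ref_km: f64, scale_height_km: f64) -> f64 {
    ((h_km - h_ref_km) / scale_height_km).exp()
}

fn main() {
    // Illustrative numbers: an 800 km orbit vs a 300 km "sky-skimming"
    // orbit, with an assumed 60 km effective scale height.
    let ratio = relative_lifetime(800.0, 300.0, 60.0);
    println!("debris at 800 km lingers ~{:.0}x longer than at 300 km", ratio);
}
```

The specific values are assumptions, but the exponential dependence is why "higher LEO" is so much worse than the skimming orbits described above.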

Comment Re:Apply Betteridge's Law (Score 1) 49

So, no, this cluster of patches doesn't tell us anything in particular beyond what we already knew: That emergency patches are relatively common.

Considering that Microsoft has been promising this exact same type of improvement since the release of XP Service Pack 3, the words spoken now are worthless platitudes provided to ensure the smoothness of the theft of your money. There is zero reality behind any of their promises.

I'm just talking about statistical patterns. I know little about Microsoft patches. I abandoned Windows in 2001, right around the time XP was released, and have never looked back.

Comment Re:25,000 lines of code (Score 1) 78

The LLM and the compiler and the formatter will get the low-level details right.

Maybe in about 90% of cases, if you are lucky. That still leaves about a 10% error rate, which is way too much.

Not remotely similar to my experience. Granted I'm writing Rust, and the Rust compiler is *really* picky, so by the time the agent gets something that compiles it's a lot closer to correct than in other languages. Particularly if you know how to use the type system to enforce correctness.
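A minimal sketch of "using the type system to enforce correctness" is the newtype/parse-don't-validate pattern (the names `Raw`, `Validated`, `validate` here are hypothetical, not from any real codebase): downstream code simply cannot be called with data that hasn't passed validation, so the compiler catches an entire class of agent mistakes.

```rust
// Unvalidated and validated input get distinct types; only validate()
// can produce a Validated, so process() can never see raw input.
struct Raw(String);
struct Validated(String);

fn validate(r: Raw) -> Result<Validated, String> {
    if r.0.is_empty() {
        Err("empty input".into())
    } else {
        Ok(Validated(r.0))
    }
}

// Accepts only Validated — passing a Raw is a compile error.
fn process(v: &Validated) -> usize {
    v.0.len()
}

fn main() {
    let v = validate(Raw("hello".into())).unwrap();
    println!("{}", process(&v));
    // process(&Raw("oops".into())); // rejected by the compiler
}
```

This is why "it compiles" means more in Rust than in most languages: an agent that forgets the validation step gets a type error, not a latent bug.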

Your job is to make sure the structure is correct and maintainable, and that the test suites cover all the bases,

Depends on the definition of "bases". A passing test suite does not show your program is correct. And if your test suite is also AI-generated, then you are back at the problem of whether the tests themselves are correct.

Yes, you have to know how to write tests. A few decades of experience helps a lot. I find I actually spend a lot more time focused on the details of APIs and data structures than the details of tests, though. Getting APIs or data structures wrong will cost you down the road.

Also, I suppose it helps a bit that my work is in cryptography (protocols, not algorithms). The great thing about crypto code is that if you get a single bit wrong, it doesn't work at all. If you screw up the business logic just a little bit, you get completely wrong answers. The terrible thing is that if you get a single bit wrong, it doesn't work at all and gives you no clue where your problem might be.

Of course that's just functional correctness. With cryptography, the really hard part is making sure that the implementation is actually secure. The AI can't help much with that. That requires lots of knowledge and lots of experience.

and then to scan the code for anomalies that make your antennas twitch,

Vibe error detection goes nicely with vibe programming. That being said, experienced programmers have a talent for detecting errors. But detecting some errors here and there is far from a full code review. Well, you can ask an LLM to do it as well, and many of the proposals it provides are good. Greg Kroah-Hartman estimates about 2/3 are good and the rest are only marginally usable.

Deep experience is absolutely required. My antennas are quite good after 40 years.

then dig into those and start asking questions -- not of product managers and developers, usually, but of the LLM!

Nothing goes as nicely as discussing with an LLM. The longer you are at it, the more askew it goes.

You really have to know what questions to ask, and what answers not to accept. It also helps to know what kinds of errors the LLM makes. It never outright lies, but it will guess rather than look, so you have to know when and how to push it, and how to manage its context window. When stuff starts falling out of the context window the machine starts guessing, approximating, justifying. Sometimes this means you need to make it spawn a bunch of focused subagents each responsible for a small piece of the problem. There are a lot of techniques to learn to maximize the benefit and minimize the errors.

My point is that 25k LOC a month (god forbid a week) is a lot. It may look like it works on the outside, but it is likely full of (hopefully only small) errors. Especially when you decide that you do not need to human-review all the LLM-generated code. But if you consider, e.g., the lines of an XML file defining your UI (which you have drawn in some GUI designer) to be valid LOC, then yeah, 25k is not a big deal. Not all LOCs are equal.

Yeah, I am definitely not doing UI work.
