
Comment Re:Apply Betteridge's Law (Score 1) 49

So, no, this cluster of patches doesn't tell us anything in particular beyond what we already knew: That emergency patches are relatively common.

Considering that Microsoft has been promising this exact same type of improvement since the release of XP Service Pack 3, the words spoken now are worthless platitudes provided to ensure the smoothness of the theft of your money. There is zero reality behind any of their promises.

I'm just talking about statistical patterns. I know little about Microsoft patches. I abandoned Windows in 2001, right around the time XP was released, and have never looked back.

Comment Re:25,000 lines of code (Score 1) 76

The LLM and the compiler and the formatter will get the low-level details right.

Maybe in about 90% of cases, if you are lucky. That still leaves about a 10% error rate, which is way too much.

Not remotely similar to my experience. Granted I'm writing Rust, and the Rust compiler is *really* picky, so by the time the agent gets something that compiles it's a lot closer to correct than in other languages. Particularly if you know how to use the type system to enforce correctness.
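As a sketch of what I mean by using the type system to enforce correctness (hypothetical names, toy logic, not real crypto): a newtype wrapper makes it impossible to pass an unvalidated key where a validated one is required, so the compiler catches the mistake before the code ever runs.

```rust
// Newtype pattern: validation status is encoded in the type, not in a flag.
struct RawKey(Vec<u8>);
struct ValidatedKey(Vec<u8>);

fn validate(raw: RawKey) -> Result<ValidatedKey, String> {
    // Hypothetical rule: keys must be exactly 32 bytes.
    if raw.0.len() == 32 {
        Ok(ValidatedKey(raw.0))
    } else {
        Err(format!("bad key length: {}", raw.0.len()))
    }
}

// This function can only ever receive a key that passed validation.
fn encrypt(key: &ValidatedKey, plaintext: &[u8]) -> Vec<u8> {
    // Toy placeholder "cipher" (repeating XOR), purely illustrative.
    plaintext
        .iter()
        .zip(key.0.iter().cycle())
        .map(|(p, k)| p ^ k)
        .collect()
}

fn main() {
    let key = validate(RawKey(vec![0x42; 32])).expect("valid key");
    let ct = encrypt(&key, b"hello");
    // encrypt(&RawKey(vec![]), b"hello"); // would not compile: wrong type
    println!("{:?}", ct);
}
```

The agent can't "forget" to validate, because unvalidated data literally has the wrong type.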

Your job is to make sure the structure is correct and maintainable, and that the test suites cover all the bases,

Depends on the definition of "bases". A passing test suite does not show your program is correct. And if your test suite is also AI-generated, then you are back at the problem of whether the tests themselves are correct.

Yes, you have to know how to write tests. A few decades of experience helps a lot. I find I actually spend a lot more time focused on the details of APIs and data structures than the details of tests, though. Getting APIs or data structures wrong will cost you down the road.

Also, I suppose it helps a bit that my work is in cryptography (protocols, not algorithms). The great thing about crypto code is that if you get a single bit wrong, it doesn't work at all. If you screw up the business logic just a little bit, you get completely wrong answers. The terrible thing is that if you get a single bit wrong, it doesn't work at all and gives you no clue where your problem might be.
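A toy illustration of that all-or-nothing property (using the standard library's hasher as a stand-in, not real cryptography): flip a single bit of the key and the output bears no resemblance to the correct one, and gives no hint of where the error is.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for a MAC: hash the key and the message together.
// DefaultHasher::new() is deterministic, so this is repeatable.
fn toy_mac(key: &[u8], msg: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    msg.hash(&mut h);
    h.finish()
}

fn main() {
    let key = [0u8; 16];
    let mut bad_key = key;
    bad_key[0] ^= 1; // a single-bit error
    let good = toy_mac(&key, b"message");
    let bad = toy_mac(&bad_key, b"message");
    assert_ne!(good, bad); // completely different, with no clue why
    println!("{:016x} vs {:016x}", good, bad);
}
```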

Of course that's just functional correctness. With cryptography, the really hard part is making sure that the implementation is actually secure. The AI can't help much with that. That requires lots of knowledge and lots of experience.

and then to scan the code for anomalies that make your antennas twitch,

Vibe error detection goes nicely with vibe programming. That being said, experienced programmers have a talent for detecting errors. But detecting some errors here and there is far from a full code review. Well, you can ask an LLM to do it as well, and many of the proposals it provides are good. Greg Kroah-Hartman estimates about 2/3 are good and the rest are only marginally usable.

Deep experience is absolutely required. My antennas are quite good after 40 years.

then dig into those and start asking questions -- not of product managers and developers, usually, but of the LLM!

Nothing goes as nicely as discussing with an LLM. The longer you are at it, the more askew it goes.

You really have to know what questions to ask, and what answers not to accept. It also helps to know what kinds of errors the LLM makes. It never outright lies, but it will guess rather than look, so you have to know when and how to push it, and how to manage its context window. When stuff starts falling out of the context window the machine starts guessing, approximating, justifying. Sometimes this means you need to make it spawn a bunch of focused subagents each responsible for a small piece of the problem. There are a lot of techniques to learn to maximize the benefit and minimize the errors.

My point is that 25k LOC a month (god forbid a week) is a lot. It may look like it works from the outside, but it is likely full of (hopefully only small) errors. Especially when you decide that you do not need to human-review all the LLM-generated code. But if you consider, e.g., lines of an XML file defining your UI (which you have drawn in some GUI designer) to be valid LOC, then yeah, 25k is not a big deal. Not all LOCs are equal.

Yeah, I am definitely not doing UI work.

Comment Re:25,000 lines of code (Score 1) 76

it's during those sprints when I'm pumping out thousands of lines per day that I write the code that turns out to be the highest quality, requiring the fewest bugfixes later

Yeah, all of us write (or copy/paste) great boilerplate code. That's not really something to be proud of.

We all make mistakes when writing business logic, which is never 25k LOC in a week.

Speak for yourself. I wrote Android's Keymaster implementation in less than a month, and it was about that size, and then re-wrote most of it in a week when it turned out I'd made some core assumptions that Qualcomm couldn't match in their implementation. It was relatively bug-free for a decade -- even when a third-party security research lab spent a month scrutinizing it. They found a handful of things, but nothing serious. I was amazed, especially since I'd seen the reports they turned in on some other code.

That's just one example. In my nearly 40-year career I've had a half dozen crazy-productive weeks like that, and usually when working on particularly-complex bits. If you haven't had that experience, that's unfortunate. It's not something I could do frequently (or would want to), but it's a glorious feeling when you're that deep in the zone.

Comment Re:Doing the editor's job. (Score 3, Informative) 37

Relativity = gravity is represented by the curvature of spacetime. Curvature enters the equations linearly, as R. As things get closer and curvature spikes, the math just scales at a 1:1 rate.

Quadratic gravity = Squares the curvature. Doesn't really change things much when everything is far apart, but heavily changes things when everything is close together.

Pros: prevents infinities and other problems when trying to reconcile quantum theory with relativity ("makes the theory renormalizable"). E.g. you don't want to calculate "if I add up the probabilities of all of these possible routes to some specific event, what are the odds that it happens?" -> "Infinity percent odds". That's... a problem. Renormalization is a trick for electromagnetism that prevents this by letting the infinities cancel out. But it doesn't work with linear curvature - gravitons carry energy, which creates gravity, which carries more energy... it explodes, and renormalization attempts just create new infinities. But it does work with quadratic curvature - it weakens high-energy interactions and allows for convergence.
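For the curious, the schematic difference between the two actions looks like this (signs and normalizations vary by convention; the coefficients alpha and beta are free parameters of the quadratic theory):

```latex
% General Relativity: the Einstein-Hilbert action, linear in the curvature R
S_{EH} = \frac{1}{16\pi G} \int d^4x \, \sqrt{-g} \; R

% Quadratic gravity: add curvature-squared terms (the 1977 work mentioned below)
S = \int d^4x \, \sqrt{-g} \left( \frac{R}{16\pi G}
      + \alpha R^2 + \beta R_{\mu\nu} R^{\mu\nu} \right)
```

The squared terms are negligible at low curvature but dominate at high curvature, which is exactly what tames the high-energy behavior (and also what introduces the ghosts).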

Cons: Creates "ghosts" (particles with negative energies or negative probabilities, which create their own problems). There are various proposed solutions, but none that's really a "eureka!" moment. They're generally along the lines of "they exist but are purely virtual and don't interact", "they exist but they're so massive that they decay before they can interact with the universe", "they don't exist, we're just using the math out of bounds and need a different representation of the same thing", "if we don't stop at R^2 but also add in R^3, R^4, ... on to infinity, then they go away". Etc.

The theory isn't new, BTW. The idea is from 1918 (just a few years after Einstein's theory of General Relativity was published), and the work that led to the "Pros" above is from 1977.

Comment Re:And media selection of alarmist data (Score 4, Interesting) 38

A bit more about the latter. Beyond organophosphates, the main other alternative is pyrethroids. These are highly toxic to aquatic life, and they're contact poisons to pollinators just landing on a treated surface (some anti-insect clothing is soaked in pyrethrin for this effect). Also, neonicotinoids are often applied as seed coatings (which are taken up and spread through the plant), so exposure comes primarily through the plant itself. The alternatives are commonly foliar sprays. This means drift and non-target impacts as well, such as in your shelterbelts, private gardens, neighbors' homes, etc. You also have to use far higher total pesticide quantities with foliar sprays than with systemics, and sprays not only drift but also wash off, etc. Neonicotinoids can affect floral visitors, with adverse sublethal impacts, but e.g. large pyrethroid sprayings can cause massive, immediately fatal knockdown events across whole populations of pollinators.

Regrettable substitution is a real thing. We need to factor it in better. And that applies to nanoplastics as well.

Comment Re:And media selection of alarmist data (Score 4, Interesting) 38

So, when we say microplastics, we really mainly mean nanoplastics - the stuff made from, say, drinking hot liquids from low-melting-point plastic containers. And yeah, they very much look like a problem. The strongest evidence is for cardiovascular disease. The 2024 NEJM study, for example, found that patients with above-threshold levels of nanoplastics in carotid artery plaque were 4.5x more likely to suffer a heart attack. Neurologically, they cross the blood-brain barrier (and quite quickly). A 2023 study found that they cause alpha-synuclein to misfold and clump together, a hallmark of Parkinson's and various kinds of dementia. Broadly, they're associated with oxidative stress, neuroinflammation, protein aggregation, and neurotransmitter alterations; the oxidative stress comes from cells struggling to break down the nanoplastics inside them. They're also associated with immunotoxicity, inflammatory bowel disease, and reproductive dysfunction, including elevating inflammatory markers, impairing sperm quality, and modulating the tumor microenvironment. With respect to reproduction, they're also associated with epigenetic dysregulation, which can lead to heritable changes.

And here's one of the things that gets me - and let me briefly switch to a different topic before looping back. All over, there's a rush to ban polycarbonate due to concerns over a degradation product (bisphenol-A), because it's (very weakly) estrogenic. But the effective estrogenic activity from typical levels of bisphenol-A is orders of magnitude lower than that of phytoestrogens in food and supplements; bisphenol-A is just too rare to exert much impact. Phytoestrogens have way better PR than bisphenol-A, and people spend money buying products specifically to consume more of them. Some arguments against bisphenol-A focus on what type of estrogenic activity it can promote (more proliferative activity), but that falls apart given that different phytoestrogens span the whole gamut of types of activation. Earlier research arguing for an association with estrogen-linked cancer seems to have fallen apart in more recent studies. It does seem associated with PCOS, but it's hard to describe that as a causal association, because PCOS is associated with all sorts of things, including diet (which could change the exposure rate vs. non-PCOS populations) and significant hormonal changes (which could change the clearance rate of bisphenol-A vs. non-PCOS populations). In short, bisphenol-A from polycarbonate is not without concern, but the concern level seems like it should be much lower than with nanoplastics.

Why bring this up? Because polycarbonate is a low-nanoplastic-emitting material. It is a quite resilient, heat-tolerant plastic, and thus - being much further from its glass transition temperature - is not particularly prone to shedding nanoplastics. By contrast, its replacements - polyethylene, polypropylene, polyethylene terephthalate, etc. - are highly associated with nanoplastic release, particularly with hot liquids. So by banning polycarbonate, we increase our exposure to nanoplastics, which are much better associated with actual harms. And unlike bisphenol-A, which is rapidly eliminated from the body, nanoplastics persist. You can't get rid of them. If some big harm is discovered with bisphenol-A that suddenly makes its risk picture seem much bigger than that of nanoplastics, we can then just stop using it, and any further harm is gone. But we can't do that with nanoplastics.

People seriously need to think more about substitution risks when banning products. The EU in particular is bad about not considering it. Like, banning neonicotinoids and causing their replacement by organophosphates, etc isn't exactly some giant win. Whether it's a benefit to pollinators at all is very much up in the air, while it's almost certain that the substitution is more harmful for mammals such as ourselves (neonicotinoids have very low mammalian toxicity, unlike e.g. organophosphates, which are closely related to nerve agents).

Comment Re:25,000 lines of code (Score 1, Interesting) 76

It might take one person one year to write 25k lines.

A year? I've regularly written that much in a month, and sometimes in a week. And, counter-intuitively, it's during those sprints when I'm pumping out thousands of lines per day that I write the code that turns out to be the highest quality, requiring the fewest bugfixes later. I think it's because that very high productivity level can only happen when you're really in the zone, with the whole system held in your head. And when you have that full context, you make fewer mistakes, because mistakes mostly derive from not understanding the other pieces your code is interacting with.

Of course, that kind of focus is exhausting, and you can't do it long term.

How does a person get their head around that in 15 hours?

By focusing on the structure, not the details. The LLM and the compiler and the formatter will get the low-level details right. Your job is to make sure the structure is correct and maintainable, and that the test suites cover all the bases, and then to scan the code for anomalies that make your antennas twitch, then dig into those and start asking questions -- not of product managers and developers, usually, but of the LLM!

But, yeah, it is challenging -- and also strangely addictive. I haven't worked more than 8 hours per day for years, but I find myself working 10+ hours per day on a regular basis, and then pulling out the laptop in bed at 11 PM to check on the last thing I told the AI to do, mostly because it's exhilarating to be able to get so much done, at such high quality, so quickly.

Comment Re:Was not expecting them to admit that (Score 1) 58

They had to say it that way, because the more accurate statement is that the dealership law unfairly advantages existing automakers.

Even the entrenched automakers don't want dealerships to exist; they would all prefer to sell directly. They have better ways to keep down competition at the federal level. Dealerships just take a cut that the automakers could keep entirely for themselves if dealerships didn't exist.

That's a valid point, though right now while they're facing competition from startups the dealerships do provide them with a moat that they want to preserve. If/when the startup threat is gone, the automakers will go back to hating the dealerships.

I think people forget how everyone laughed at Tesla because everyone knew that starting a new car company in the United States was impossible. Now we also have Lucid and Rivian. Maybe someday Aptera will manage to get off the ground. This is a novel situation for American carmakers.

Comment Re:Was not expecting them to admit that (Score 4, Informative) 58

>arguing it unfairly advantages startups

Way to say your dealers suck.

They had to say it that way, because the more accurate statement is that the dealership law unfairly advantages existing automakers. It's not about the dealerships being good or bad; it's that setting up a dealership network takes a lot of time and money, and requiring one is a good way to keep new competition out.

Comment Re:The old guard bribed these restrictions (Score 4, Interesting) 58

into place to protect their oligopoly. Some blame it on "socialism" when it's really crony capitalism.

The correct term is "regulatory capture". Private businesses use the power of the state to protect, subsidize or otherwise benefit them and harm competitors and potential competitors. It's extremely common and the more pervasive the regulation is, the more common it is. Red tape and government procedures benefit entrenched players who have built the institutional structures and knowledge to deal with them.

This isn't to say that all regulation is bad... but a lot of it is. There was never any consumer benefit to banning direct sales. All regulations should be thoroughly scrutinized for their effects on the market, direct and indirect.

Comment Re:Good but they 'summarized' al the science. (Score 3, Insightful) 65

Anything that wasn't action, drama, or comedy was largely dropped and almost all of the science was quick summary explanations.

I think that's necessary. Providing explanations of depth comparable to the book would require a 10-hour movie. Squeezing the story down to feature length requires cutting a lot of exposition. In many books there's a lot of description that can be replaced with visuals, but it's pretty hard to do that with a lot of the science.

Comment Re:LLMs can't explain themselves (Score 1) 41

One issue with the overall architecture (which is just statistical prediction) is that it can't really provide useful insights on why it did what it did.

I think you're describing the models from a year ago. Most of the improvements in capability since then (and the improvements have been really large) are directly due to changes that have the AI model talk to itself to better reason out its response before providing it, and one of the results of that is that most of the time they absolutely can explain why they did what they did. There are exceptions, but they are the exception, not the rule.

It's interesting to compare this with humans. Humans generally can give you an explanation for why they did what they did, but research has demonstrated pretty conclusively that a large majority of the time those explanations are made up after the fact, they're actually post-hoc justifications for decisions that were made in some subconscious process. Researchers have demonstrated that people are just as good at coming up with explanations for decisions they didn't make as for decisions they did! The bottom line is that people can't really provide useful insights on why they did what they did, they're just really good at inventing post-hoc rationales.
