Forgot your password?
typodupeerror

Comment Re:Got some questions (Score 1) 37

IIUC there are two different kinds of things that are called "gravity waves" in quantum physics by those who aren't experts in the field. One of those is undetectable, and the other is what we've been detecting. (I'm no expert, so I can't clarify that.) There's also something called "gravity waves" in fluid dynamics, and that's definitely detectable.

Comment Re:Doing the editor's job. (Score 4, Informative) 37

Relativity = gravity is represented by the curvature of spacetime. Curvature is linear, R. The formula treats curvature linearly. As things get closer and curvature spikes, the math just scales at a 1:1 rate

Quadratic gravity = Squares the curvature. Doesn't really change things much when everything is far apart, but heavily changes things when everything is close together.

Pros: prevents infinities and other problems when trying to reconcile quantum theory with relativity ("makes the theory renormalizable"). E.g. you don't want to calculate "if I add up the probabilities of all of these possible routes to some specific event, what are the odds that it happens?" -> "Infinity percent odds". That's... a problem. Renormalization is a trick for electromagnetism that prevents this by letting the infinities cancel out. But it doesn't work with linear curvature - gravitons carry energy, which creates gravity, which carries more energy... it explodes, and renormalization attempts just create new infinities. But it does work with quadratic curvature - it weakens high-energy interactions and allows for convergence.

Cons: Creates "ghosts" (particles with negative energies or negative probabilities, which create their own problems). There's various proposed solutions, but none that's really a "eureka!" moment. Generally along the lines of "they exist but are purely virtual and don't interact", "they exist but they're so massive that they decay before they can interact with the universe", "they don't exist, we're just using the math out of bounds and need a different representation of the same", "If we don't stop at R^2 but also add in R^3, R^4, ... on to infinity, then they go away". Etc.

The theory isn't new, BTW. The idea is from 1918 (just a few years after Einstein's theory of General Relativity was published), and the work that led to the "Pros" above is from 1977.

Comment Re:Overblown (Score 1) 37

It depends on your time horizon. The predictions aren't *currently* testable. Testing them depends on building new tools.

I sort of don't like it, because I don't really accept continuity, but I've no evidence that my feeling is correct. (I expect things to break down before one gets to 10^-33 cm. But that's because of something Wheeler speculated about in the 1980's.)

Comment Re:And media selection of alarmist data (Score 4, Interesting) 40

A bit more about the latter. Beyond organophosphates, the main other alternative is pyrethroids. These are highly toxic to aquatic life, and they're contact poisons to pollinators just landing on the surface (some anti-insect clothing is soaked in pyrethrin for its effect). Also, neonicotinoids are often applied as seed coatings (which are taken up and spread through the plant), which primarily just affect the plant itself. Alternatives are commonly foliar sprays. This means drift to non-target impacts as well, such as in your shelterbelts, private gardens, neighbors' homes, etc. You also have to use far higher total pesticide quantities with foliar sprays instead of systematics, which not only drift, but also wash off, etc. Neonicotinoids can impact floral visitors, with adverse sublethal impacts but e.g. large pyrethroid sprayings can cause massive immediate fatal knockdown events of whole populations of pollinators.

Regrettable substitution is a real thing. We need to factor it in better. And that applies to nanoplastics as well.

Comment Re: 25,000 lines of code (Score 1) 76

You assume that a standards document exists and is also sufficiently specific for all scenarios. Other than some very fundamental IETF stuff have I seen a standards document that pretty much covers the scope specifically. Even more severely, "specifications" for an internal project have been so traditionally bad, a whole methodology cropped up basically saying that getting specifications that specifically correct is a waste of time because during the coding it will turn out to not be workable.

Yes, it can write hundreds of tests, but if the same mediocre engine that can't code it right is also generating tests, the tests will be mediocre. Leading to bizarre things like a test case to make sure '1234' comes back as 'abcd' and the function just always returns the fixed string 'abcd' and passes the test because it decided to make a test and pass it instead of trying to implement the logic. I have seen people almost superstitiously add to a prompt "and test everything to make sure it's correct" and declare "that'll fix the problems". The superstitious prompting is a big problem in my mind, that people think they add a magic phrase and suddenly the LLM won't make the mistakes LLMs tend to make. I have seen people take an LLM at their word when the LLM "promises" to not make a specific mistake, and then confounded the first time they hit the LLM making the mistake anyway. "It specifically said it wouldn't do that!", it doesn't understand promises, the thing just will generate the 'consistent' followup to a demand for a promise which is text indicating making the promise.

Take the experiment where they took Opus 4.6 and made it produce a C compiler. To do so, the guy at Anthropic said point blank he had to invest a great deal of effort in a test harness, that the process needed an already working gcc to use as a reference on top of that, and specified the end game as a bootable, compiled kernel. Even then he had to intervene to fix it and it couldn't do the whole thing and when people reviewed the published result, it failed to compile other valid code and managed to compile things that shouldn't have been compilable. This is Anthropic with their best model doing a silly stunt to create a knock off of an existing open source project with full access to said project and source code and *still* it being a lot of human work for mediocre output.

Yes, it has utility, but there's a lot of people overestimating capabilities and underestimating risks and it's hard for the non-technical decision makers to tell the difference until much further down the line. Mileage varies greatly depending on the nature of the task at hand as to whether LLM is barely useful at all or it can credibly almost generate the whole thing.

Comment Re:And media selection of alarmist data (Score 4, Interesting) 40

So, when we say microplastics, we really mainly mean nanoplastics - the stuff made from, say, drinking hot liquids from low-melting-point plastic containers. And yeah, they very much look like a problem. The strongest evidence is for cardiovascular disease. The 2024 NEJM study for example found that for patients with above-threshold levels of nanoplastics in cartoid artery plaque were 4,5x more likely to suffer from a heart attack. Neurologically, they cross the brain-blood barrier (and quite quickly). A 2023 study found that they cause alpha-synuclein to misfold and clump together, a halmark of Parkinsons and various kinds of dementia. broadly, they're associated with oxidative stress, neuroinflammation, protein aggregation, and neurotransmitter alterations. Oxidative stress is due to cells struggling to break down nanoplastics in them. They're also associated with immunotoxicity, inflammatory bowel disease, and reproductive dysfunction, including elevating inflammatory markers, impairing sperm quality, and modulating the tumor microenvironment. With respect to reproduction, they're also associated with epigenetic dysregulation, which can lead to heritable changes.

And here's one of the things that get me - and let me briefly switch to a different topic before looping back. All over, there's a rush to ban polycarbonate due to concerns over a degradation product (bisphenol-A), because it's (very weakly) estrogenic. But typical effective estrogenic activity from typical levels of bisphenol-A are orders of magnitude lower than that of phytoestrogens in food and supplements; bisphenol-A is just too rare to exert much impact. Phytoestrogens have way better PR than bisphenol-A, and people spend money buying products specifically to consume more of them. Some arguments against bisphenol-A focus on what type of estrogenic activity it can promote (more proliferative activity), but that falls apart given that different phytoestrogens span the whole gamut of types of activation. Earlier research arguing for an association with estrogen-linked cancer seems to have fallen apart in more recent studies. It does seem associated with PCOS, but it's hard to describe it as a causal association, because PCOS is associated with all sorts of things, including diet (which could change the exposure rate vs. non-PCOS populations) and significant hormonal changes (which could change the clearance rate of bisphenol-A vs. non-PCOS populations). In short, bisphenol-A from polycarbonate is not without concern, but the concern level seems like it should be much lower than with nanoplastics.

Why bring this up? Because polycarbonate is a low-nanoplastic-emitting material. It is a quite resilient, heat tolerant plastic, and thus - being much further from its glass transition temperature - is not particularly prone to shedding nanoplastics. By contrast, its replacements - polyethylene, polypropylene, polyethylene terephthate, etc - are highly associated with nanoplastic release, particularly with hot liquids. So by banning polycarbonate, we increase our exposure to nanoplastics, which are much better associated with actual harms. And unlike bisphenol-A, which is rapidly eliminated from the body, nanoplastics persist. You can't get rid of them. If some big harm is discovered with bisphenol-A that suddenly makes the risk picture seem much bigger than with nanoplastics, we can then just stop using it, and any further harm is gone. But we can't do that with nanoplastics.

People seriously need to think more about substitution risks when banning products. The EU in particular is bad about not considering it. Like, banning neonicotinoids and causing their replacement by organophosphates, etc isn't exactly some giant win. Whether it's a benefit to pollinators at all is very much up in the air, while it's almost certain that the substitution is more harmful for mammals such as ourselves (neonicotinoids have very low mammalian toxicity, unlike e.g. organophosphates, which are closely related to nerve agents).

Slashdot Top Deals

"Virtual" means never knowing where your next byte is coming from.

Working...