Comment Re:Looks like a robotic arm on a rail (Score 1) 48
Gee, it is almost as if the U.S. government needs to encourage homegrown solar panels. If only the U.S. could find such a government.
Reply "yes", then close and reopen this message to activate the link.
No matter how idiot-proof you make technology, God will always create a better idiot. That's why the right way to solve this problem is:
You don't like Time Machine? I have hourly backups on one drive, and daily backups on a drive I store in a different location.
I love Time Machine (except for how slow it is over SMB and how often the disk images corrupt themselves in ways that prevent future backups). Wish it existed on iOS and VisionOS.
>arguing it unfairly advantages startups
Way to say your dealers suck.
They had to say it that way, because the more accurate statement is that the dealership law unfairly advantages existing automakers. It's not about the dealerships being good or bad, it's about the fact that setting up a dealership network takes a lot of time and money and requiring it is a good way to keep new competition out.
into place to protect their oligopoly. Some blame it on "socialism" when it's really crony capitalism.
The correct term is "regulatory capture". Private businesses use the power of the state to protect, subsidize or otherwise benefit them and harm competitors and potential competitors. It's extremely common and the more pervasive the regulation is, the more common it is. Red tape and government procedures benefit entrenched players who have built the institutional structures and knowledge to deal with them.
This isn't to say that all regulation is bad... but a lot of it is. There was never any consumer benefit to banning direct sales. All regulations should be thoroughly scrutinized for their effects on the market, direct and indirect.
The alien is so remotely unlike us that it's a little hard to believe it would have a thought system we could understand and communicate with.
I thought exactly the opposite. I think Rocky is far too "human". I didn't mind it, though, because a lot of the humor would be lost if Rocky was properly alien.
Anything that wasn't action, drama, or comedy was largely dropped and almost all of the science was quick summary explanations.
I think that's necessary. Providing explanations of depth comparable to the book would require a 10-hour movie. Squeezing the story down to feature length requires cutting a lot of exposition. In many books there's a lot of description that can be replaced with visuals, but it's pretty hard to do that with a lot of the science.
So far I don't see anything tremendous besides money wasted.
I'm sure you'd have said the same about Y2K. It's a good thing that some people have more foresight.
Your no true Scotsman fallacy is showing; you don't even know what a Scotsman looks like. Virtually 100% of patent holders sit on all their patents for the entire duration of the patent.
That's because virtually 100% of patent holders use their patents defensively.
waiting for the patented technology to be ingrained in the industry
Dolby actively used their patents and actively defended them. They created that technology and marketed it heavily. They didn't sit around and wait. Just because they make most of their money from licensing doesn't make them a patent troll any more than every university in the world is suddenly a patent troll by your definition.
You missed the part where they knowingly allowed a patent to become part of a published open standard and ignored it for an entire decade, *then* started going after violations.
Oh, actually, it's worse than that. Dolby acquired these patents from General Electric two years ago. So in this matter, they quite literally ARE patent trolls. They did nothing to create this technology, but rather bought the patents to enrich themselves by becoming a leech on the industry now that companies are abandoning their codecs in favor of codecs whose encoders don't involve royalties.
Yes, but using them offensively after sitting on them violates the doctrine of laches.
This isn't offensive. By all accounts, their licensed product has been used without a license being paid.
You obviously don't understand patent law terminology, so let me give you a refresher:
Suing multiple companies for violating a patent without getting sued first is the very definition of offensive use of a patent.
In effect, they sat on the patents so that people would end up depending on AV1
Congrats on falling into a vortex of ignorance. Headlines are fun to latch on to, especially useless ones like Slashdot headlines. Dolby isn't suing Snapchat for AV1. Dolby is suing Snapchat for not paying an HEVC license. AV1 is just caught up in it as a listed example because Snapchat's HEVC-AV1 transcoder is one of the infringing items on the docket.
Those are actually separate lawsuits. (See link above.) The AV1 lawsuit is suing to stop them from using AV1 and force them to use a Dolby-licensed codec. They're also suing a Chinese hardware maker over AV1 at the same time.
At this point, it would be entirely reasonable for a judge to declare that because they failed to act against AOMedia
That's not how the law works. AOMedia has infringed zero patents. You can't infringe a patent by creating an algorithm and publishing it online. If that were the case you may as well say the US Patent Office is infringing patents. Businesses using products infringe patents.
The hell you can't. Patent infringement occurs on creating an instance of an invention. The moment they create source code for the software (an instantiation of the patent), they have violated the patent. It doesn't have to be instantiated into hardware or used by a business to be a violation. The patent violations began when AOMedia distributed the first beta versions a decade ago. The original patent holder (GE) did not sue.
To be fair, the reference implementation may not have been directly created or distributed by AOMedia, in which case the same applies, but to whatever company actually created and distributed it. This is largely an unimportant detail.
Businesses using products *also* infringe patents, which IMO, is a bad thing, but that's a separate discussion.
they lost their right to sue AOMedia for damages in creating the patented technology
Literally no one is suing AOMedia.
You literally didn't understand what I said.
Patent exhaustion occurs when a product is sold by someone who has the right to sell something that violates a patent, which typically means that either they own the patent or they paid licensing fees. It prevents someone from then suing downstream customers. And there is a six-year statute of limitations on suing over a patent violation. What I'm arguing is that:
This is a legal theory. To my knowledge, it has never been tested in court, largely because companies do not do what Dolby is doing, suing companies for using open source reference implementations or their derivatives nearly a decade after their release. And it should be clear that this theory applies only to patents in the context of software.
Even re-architecting might not fix their problem. It depends upon how much their software people are relying upon bot-generated code. Given their famous attention to detail, what's the likelihood that they are pushing out code they do not understand because "it worked"? The hardest bugs do not show up in test harnesses. So if they have built up a giant sticky wad of code they do not understand, there's no going through it all quickly, if that is even possible. If they re-architect with the same software dependence on bots, they haven't really solved the underlying issue, which is the way they build stuff.
One issue with the overall architecture (which is just statistical prediction) is that it can't really provide useful insights on why it did what it did.
I think you're describing the models from a year ago. Most of the improvements in capability since then (and the improvements have been really large) are directly due to changes that have the AI model talk to itself to better reason out its response before providing it, and one of the results of that is that most of the time they absolutely can explain why they did what they did. There are exceptions, but they are the exception, not the rule.
It's interesting to compare this with humans. Humans generally can give you an explanation for why they did what they did, but research has demonstrated pretty conclusively that a large majority of the time those explanations are made up after the fact; they're actually post-hoc justifications for decisions that were made in some subconscious process. Researchers have demonstrated that people are just as good at coming up with explanations for decisions they didn't make as for decisions they did! The bottom line is that people can't really provide useful insights on why they did what they did; they're just really good at inventing post-hoc rationales.
If you can't code worth a damn, then of course the AI is going to find a lot of "bugs"...
If you're asserting that Greg Kroah-Hartman can't code worth a damn, you might want to find out who he is and think again.
And the law of large numbers. Statistically, there will be patch clusters, the same way there are clusters of every other random-ish event. The fact that one happens to occur right after Microsoft promises a commitment to predictable patch schedules means not just nothing, but the opposite. Any commitment to doing better means that they recognize they haven't been doing well enough, and obviously it's not possible to do significantly better immediately; changing processes takes time, and observing the effects of those changes takes even longer.
So, no, this cluster of patches doesn't tell us anything in particular beyond what we already knew: That emergency patches are relatively common.
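For anyone who doubts that purely random events cluster, a quick simulation makes the point. The numbers below are made up for illustration (not actual Microsoft patch data):

```python
import random

# Toy illustration: scatter 50 "emergency patches" uniformly at random
# over a 365-day year, then look for the densest 14-day window.
random.seed(42)
days = sorted(random.randrange(365) for _ in range(50))

# Largest number of patches falling in any 14-day window.
max_in_window = max(sum(1 for d in days if start <= d < start + 14)
                    for start in range(365))

# A perfectly even spread would put about 2 patches in each 14-day
# window, yet purely random placement routinely produces windows with
# several times that -- clusters with no underlying cause at all.
print(max_in_window, "patches in the busiest 14-day window")
```

Run it with different seeds and you'll keep finding "suspicious" clusters, every one of them meaningless.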
"quantum resistant forever" is too strong.
I've only taken fairly general master's level courses in quantum information and regular cryptography, but I agree with this overall sentiment. My math professors used to say that no asymmetric encryption scheme has been proved unbreakable; we only know that they haven't been broken so far. Assuming something is unbreakable is like assuming Fermat's last theorem was unprovable, right up until the day it was proved. So to me "post quantum cryptography" is essentially a buzzword.
Yes, but... I think you're confusing some things. We're talking about AES, which is a symmetric encryption algorithm, not asymmetric.
Of course, no cryptographic construction has been "proven" secure, in the sense that mathematicians use the word "prove", not symmetric or asymmetric. Asymmetric schemes have an additional challenge, though, which is they have to have some sort of "trapdoor function" that mathematically relates a public key and a private key, and the public key has to be published to the attacker. Classical asymmetric cryptography is built by finding a hard math problem and building a scheme around it -- which means that a solution to the math problem breaks the algorithm.
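To make the "hard math problem with a trapdoor" idea concrete, here's textbook RSA with deliberately tiny primes. Real RSA requires far more (enormous primes, padding, constant-time arithmetic), so treat this strictly as a sketch of the structure:

```python
# Toy "textbook RSA" -- an illustration of a trapdoor built on a hard
# math problem (factoring), NOT usable cryptography.
p, q = 61, 53            # private: the factorization of n
n = p * q                # public modulus (3233)
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent, computable only via p and q

msg = 65
cipher = pow(msg, e, n)          # anyone can encrypt with (n, e)
assert pow(cipher, d, n) == msg  # only the key holder can decrypt

# An attacker who factors n recovers p and q, hence phi and d: the
# scheme is exactly as strong as the underlying math problem.
```

That last comment is the point: a single advance in factoring (or, for elliptic curves, discrete logs) kills the whole construction.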
Symmetric systems have it a bit easier, because the attacker doesn't get to see any part of the key or anything related to the key other than plaintext and corresponding ciphertext (though the standard bar is to assume the attacker has an oracle that allows them to get plaintext of arbitrary ciphertexts, i.e. the Adaptive Chosen Ciphertext attack, IND-CCA2). And the structure of symmetric ciphers isn't usually built around a specific math problem. Instead, they tend to just mangle the input in extremely complex ways. It's hard to model these mathematically, which makes attacking them with math hard.
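The "mangling" point is easy to demonstrate. SHA-256 is a hash rather than a block cipher, but it's built from the same design toolbox, and changing even a couple of input bits scrambles roughly half the output (the avalanche effect):

```python
import hashlib

# Flip one character of the input and count how many of the 256 output
# bits change. A good primitive flips about half of them.
a = hashlib.sha256(b"attack at dawn").digest()
b = hashlib.sha256(b"attack at dawm").digest()   # one character changed

diff_bits = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
print(diff_bits, "of 256 output bits differ")
```

There's no tidy equation relating input to output here, which is exactly why mathematical attacks on well-designed symmetric primitives are so hard to mount.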
In both cases, we are unable to prove that they're secure. When I started working on cryptography, the only basis for trust in algorithms was that they'd stood up to scrutiny for a long period of time. That was it. Over the last 20 years or so, we've gotten more rigorous, and "security proofs" are basically required for anyone to take your algorithm seriously today... but they aren't quite like "proofs" in the usual sense. They're more precisely called "reductions". They're mathematically-rigorous proofs that the security of the algorithm (or protocol) is reducible to a small set of assumptions -- but we have to assume those, because we can't prove them.
For most asymmetric schemes, the primary underlying assumption is that the mathematical problem at the heart of the scheme is "hard". Interestingly, there is one family of asymmetric signature schemes for which this is not true. SLH-DSA, one of the post-quantum algorithms recently standardized by NIST, provably reduces to one assumption: that the hash algorithm used is secure, meaning that it has second pre-image resistance plus a more advanced form of second pre-image resistance. Collision resistance isn't even required! This is striking because we actually have quite a lot of confidence in our secure hash algorithms. Secure hash algorithms are among the easiest to create, because all you need is a one-way function with some additional properties. And we've been studying hash functions very hard, for quite a long time, and understand them pretty well.
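For a feel of how a signature scheme can rest on nothing but a hash function, here's a minimal sketch of a Lamport one-time signature, the 1970s ancestor of SPHINCS and SLH-DSA. This is not SLH-DSA itself (which layers Merkle trees and few-time signatures on top to remove the one-time restriction), just the core idea:

```python
import hashlib
import secrets

def H(data):
    return hashlib.sha256(data).digest()

def keygen():
    # Secret key: two random 32-byte values per message bit.
    # Public key: their hashes. Security rests only on H being one-way.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(s) for s in pair] for pair in sk]
    return sk, pk

def bits(msg):
    # The 256 bits of the message digest, most significant first.
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one secret preimage per message bit.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"goodbye", sig)
```

Note the one-time restriction: signing reveals half of the secret preimages, so signing a second message with the same key leaks enough material to enable forgeries. That's the limitation the Merkle-tree machinery in SPHINCS/SLH-DSA exists to fix.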
This means that one of our "new" post-quantum asymmetric algorithms is probably the very strongest we have, not only less likely to be broken than our other asymmetric algorithms, but less likely to be broken than our symmetric algorithms. If it were broken, it would be because someone broke SHA-256 (which, BTW, would break enormous swaths of modern cryptography; it's extremely hard to find a cryptographic security protocol that doesn't use SHA-256 somewhere), and unless that same research result somehow broke all secure hash functions, we could trivially repair SLH-DSA simply by swapping out the broken hash function for a secure one.
This is an entirely different model from the way we looked at cryptography early in my career. SLH-DSA doesn't have decades of use and attack research behind it. Oh, the basic concept of hash-based signatures dates back to the late 70s, but the crucial innovations that make SPHINCS and its descendants workable are barely a decade old! BUT we have a rigorous and carefully peer-reviewed security proof that demonstrates with absolute mathematical rigor that SLH-DSA is secure iff the hash function used in it is secure.
So... a relative newcomer is more trustworthy than the algorithms we've used for decades, precisely because we no longer rely on "hasn't been broken so far" as our only evidence of security.
As for AES, the subject of the discussion above, there is no security proof for AES. There's nothing to reduce it to. There are proofs that it is secure against specific attack techniques (linear cryptanalysis and differential cryptanalysis) that were able to defeat other block ciphers, but those proofs only prove security against those specific attacks, not other attacks that are not yet known. So for AES we really do rely on the fact that it has withstood 20+ years of focused cryptanalysis, and that no one has managed to find an attack that significantly weakens it. That could change at any time, with or without quantum computers.
SLH-DSA, however, is one that very well may be secure forever, against both classical and quantum attacks. The security proof doesn't even care about classical vs quantum, it just proves that any successful attack, no matter how it's performed, provides a way to break the underlying hash function. Therefore, if the hash function is secure, SLH-DSA is secure. It's an incredibly powerful proof, like many proofs by contradiction.
Antiprotons, the forbidden PopRocks
But IMO, the most important one is that last one. We would be a lot better off if the right to a speedy trial were taken seriously. If a year or more passes between committing a crime and being prosecuted, the threat of prosecution ceases to be a meaningful deterrent to crime.
If I were in charge, there would be two nationwide statutes of limitations added that apply to all crimes:
* I'm willing to consider arguments that these numbers should be slightly higher, but not dramatically so.
If legitimate extenuating circumstances outside the control of prosecution warrant a delay (e.g. the defendant being impossible to locate or in another country), a judge could order the statute of limitations tolled. But otherwise, the only exceptions should be in situations where a mistrial or similar forces a new trial (which obviously starts more than 30 days after the initial charges are filed). And even for a retrial, there should be a hard limit of maybe 90 days from the end of the previous trial or thereabouts.
This would result in a very large number of cases not getting prosecuted, but by forcing the prosecution to triage cases and bring important cases quickly, it would ensure that fear of being brought to justice would be a real deterrent to committing crimes. Right now, it is not. Good people don't (intentionally) commit crimes, because they have morality and ethics. Bad people do, because they have neither. Almost nobody avoids doing crime merely out of fear of punishment, and that's a bad thing.
The party adjourned to a hot tub, yes. Fully clothed, I might add. -- IBM employee, testifying in California State Supreme Court