
Comment Re:Dolby is run by fuckwads (Score 1) 42

Your "no true Scotsman" fallacy is showing; you don't even know what a Scotsman looks like. Virtually 100% of patent holders sit on all their patents for the entire duration of the patent.

That's because virtually 100% of patent holders use their patents defensively.

waiting for the patented technology to be ingrained in the industry

Dolby actively used their patents and actively defended them. They created that technology and marketed it heavily. They didn't sit around and wait. Just because they make most of their money from licensing doesn't make them a patent troll any more than every university in the world is suddenly a patent troll by your definition.

You missed the part where they knowingly allowed a patent to become part of a published open standard and ignored it for an entire decade, *then* started going after violations.

Oh, actually, it's worse than that. Dolby acquired these patents from General Electric two years ago. So in this matter, they quite literally ARE patent trolls. They did nothing to create this technology, but rather bought the patents to enrich themselves by becoming a leech on the industry now that companies are abandoning their codecs in favor of codecs whose encoders don't involve royalties.

Yes, but using them offensively after sitting on them violates the doctrine of Laches.

This isn't offensive. By all accounts their licensed product has been taken without a license paid.

You obviously don't understand patent law terminology, so let me give you a refresher:

  • Defensive use of patents - patents held until someone sues you, then used to retaliate and make the other company's lawsuit more expensive and complex, usually resulting in a cross-licensing agreement.
  • Offensive use of patents - suing someone else over the patent without having been previously sued by that someone else.

Suing multiple companies for violating a patent without getting sued first is the very definition of offensive use of a patent.

In effect, they sat on the patents so that people would end up depending on AV1

Congrats on falling into a vortex of ignorance. Headlines are fun to latch on to, especially useless ones like Slashdot headlines. Dolby isn't suing Snapchat over AV1. Dolby is suing Snapchat for not paying for an HEVC license. AV1 just gets caught up as a listed example because Snapchat's HEVC-to-AV1 transcoder is one of the infringing items on the docket.

Those are actually separate lawsuits. (See link above.) The AV1 lawsuit is suing to stop them from using AV1 and force them to use a Dolby-licensed codec. They're also suing a Chinese hardware maker over AV1 at the same time.

At this point, it would be entirely reasonable for a judge to declare that because they failed to act against AOMedia

That's not how the law works. AOMedia has infringed zero patents. You can't infringe a patent by creating an algorithm and publishing it online. If that were the case you may as well say the US Patent Office is infringing patents. Businesses using products infringe patents.

The hell you can't. Patent infringement occurs on creating an instance of an invention. The moment they create source code for the software (an instantiation of the patent), they have violated the patent. It doesn't have to be instantiated into hardware or used by a business to be a violation. The patent violations began when AOMedia distributed the first beta versions a decade ago. The original patent holder (GE) did not sue.

To be fair, the reference implementation may not have been directly created or distributed by AOMedia, in which case the same applies, but to whatever company actually created and distributed it. This is largely an unimportant detail.

Businesses using products *also* infringe patents, which IMO, is a bad thing, but that's a separate discussion.

they lost their right to sue AOMedia for damages in creating the patented technology

Literally no one is suing AOMedia.

You literally didn't understand what I said.

Patent exhaustion occurs when a product is sold by someone who has the right to sell something that violates a patent, which typically means that either they own the patent or they paid licensing fees. It prevents someone from then suing downstream customers. And there is a six-year statute of limitations on suing over a patent violation. What I'm arguing is that:

  • Distribution of open source software effectively occurs exactly once per version, because the redistribution permission inherent in open source software makes it impossible to determine whether a copy of the software was obtained directly from the creator on a particular date or from someone else who previously got it from the creator.
  • Open source distribution is effectively a sale for patent purposes, just at zero cost.
  • That sale occurred a decade ago when AOMedia distributed the reference implementation.
  • Because no objection was made to that sale (against AOMedia) during the statutorily limited 6-year period, that sale should be considered to be an authorized sale, in which case patent exhaustion occurred on the results of that sale.
  • All copies of the original reference implementation and their derivatives are therefore untouchable.

This is a legal theory. To my knowledge, it has never been tested in court, largely because companies do not do what Dolby is doing, suing companies for using open source reference implementations or their derivatives nearly a decade after their release. And it should be clear that this theory applies only to patents in the context of software.

Comment Re:LLMs can't explain themselves (Score 1) 39

One issue with the overall architecture (which is just statistical prediction) is that it can't really provide useful insights on why it did what it did.

I think you're describing the models from a year ago. Most of the improvements in capability since then (and the improvements have been really large) are directly due to changes that have the AI model talk to itself to better reason out its response before providing it, and one of the results of that is that most of the time they absolutely can explain why they did what they did. There are exceptions, but they are the exception, not the rule.

It's interesting to compare this with humans. Humans generally can give you an explanation for why they did what they did, but research has demonstrated pretty conclusively that a large majority of the time those explanations are made up after the fact; they're actually post-hoc justifications for decisions that were made in some subconscious process. Researchers have demonstrated that people are just as good at coming up with explanations for decisions they didn't make as for decisions they did! The bottom line is that people can't really provide useful insights on why they did what they did; they're just really good at inventing post-hoc rationales.

Comment Apply Betteridge's Law (Score 4, Insightful) 35

And the law of large numbers. Statistically, there will be patch clusters, the same way there are clusters of every other random-ish event. The fact that one happens to occur right after Microsoft promises a commitment to predictable patch schedules means not just nothing, but the opposite. Any commitment to doing better means that they recognize they haven't been doing well enough, and obviously it's not possible to do significantly better immediately; changing processes takes time, and observing the effects of those changes takes even longer.

So, no, this cluster of patches doesn't tell us anything in particular beyond what we already knew: That emergency patches are relatively common.

Comment Re: Mac OS has already started to pester me (Score 1) 64

"quantum resistant forever" is too strong.

I've only taken fairly general master's level courses in quantum information and regular cryptography, but I agree with this overall sentiment. My math professors used to say that no asymmetric encryption scheme has been proved unbreakable; we only know that they haven't been broken so far. Assuming something is unbreakable is like saying Fermat's last theorem is unprovable — until one day it's proved. So to me "post quantum cryptography" is essentially a buzzword.

Yes, but... I think you're confusing some things. We're talking about AES, which is a symmetric encryption algorithm, not asymmetric.

Of course, no cryptographic construction has been "proven" secure, in the sense that mathematicians use the word "prove", not symmetric or asymmetric. Asymmetric schemes have an additional challenge, though, which is they have to have some sort of "trapdoor function" that mathematically relates a public key and a private key, and the public key has to be published to the attacker. Classical asymmetric cryptography is built by finding a hard math problem and building a scheme around it -- which means that a solution to the math problem breaks the algorithm.

Symmetric systems have it a bit easier, because the attacker doesn't get to see any part of the key or anything related to the key other than plaintext and corresponding ciphertext (though the standard bar is to assume the attacker has an oracle that allows them to get plaintext of arbitrary ciphertexts, i.e. the Adaptive Chosen Ciphertext attack, IND-CCA2). And the structure of symmetric ciphers isn't usually built around a specific math problem. Instead, they tend to just mangle the input in extremely complex ways. It's hard to model these mathematically, which makes attacking them with math hard.

In both cases, we are unable to prove that they're secure. When I started working on cryptography, the only basis for trust in algorithms was that they'd stood up to scrutiny for a long period of time. That was it. Over the last 20 years or so, we've gotten more rigorous, and "security proofs" are basically required for anyone to take your algorithm seriously today... but they aren't quite like "proofs" in the usual sense. They're more precisely called "reductions". They're mathematically-rigorous proofs that the security of the algorithm (or protocol) is reducible to a small set of assumptions -- but we have to assume those, because we can't prove them.

For most asymmetric schemes, the primary underlying assumption is that the mathematical problem at the heart of the scheme is "hard". Interestingly, there is one family of asymmetric signature schemes for which this is not true. SLH-DSA, one of the post-quantum algorithms recently standardized by NIST, provably reduces to one assumption: That the hash algorithm used is secure, meaning that it has both second pre-image resistance and a more advanced variant of it. Collision resistance isn't even required! This is striking because we actually have quite a lot of confidence in our secure hash algorithms. Secure hash algorithms are among the easiest to create because all you need is a one-way function with some additional properties. And we've been studying hash functions very hard, for quite a long time, and understand them pretty well.
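To make the hash-based idea concrete, here's a toy Lamport one-time signature in Python. This is my own illustration, not SLH-DSA itself -- the real scheme layers WOTS+ chains and Merkle/hypertree structures on top of this basic idea -- but it shows why forging a signature reduces to inverting the hash: each key can sign one message, and revealing a secret for a bit you didn't commit to requires finding a SHA-256 preimage.

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets, one pair per bit of the message digest.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    # The public key is the hash of every secret.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def digest_bits(message: bytes):
    d = H(message)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # Reveal one secret from each pair, selected by the message-digest bit.
    return [sk[i][bit] for i, bit in enumerate(digest_bits(message))]

def verify(pk, message: bytes, sig) -> bool:
    # Hash each revealed secret and check it against the committed value.
    return all(H(sig[i]) == pk[i][bit]
               for i, bit in enumerate(digest_bits(message)))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"goodbye", sig)
```

Note the security argument: a forger who produces a valid signature on a new message must, for at least one bit position, produce a value hashing to a public-key entry whose secret was never revealed. That's exactly the "break the hash or break nothing" reduction described above.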

This means that one of our "new" post-quantum asymmetric algorithms is probably the very strongest we have, not only less likely to be broken than our other asymmetric algorithms, but less likely to be broken than our symmetric algorithms. If it were broken, it would be because someone broke SHA-256 (which, BTW, would break enormous swaths of modern cryptography; it's extremely hard to find a cryptographic security protocol that doesn't use SHA-256 somewhere), and unless that same research result somehow broke all secure hash functions, we could trivially repair SLH-DSA simply by swapping out the broken hash function for a secure one.

This is an entirely different model from the way we looked at cryptography early in my career. SLH-DSA doesn't have decades of use and attack research behind it. Oh, the basic concept of hash-based signatures dates back to the late 70s, but the crucial innovations that make SPHINCS and its descendants workable are barely a decade old! BUT we have a rigorous and carefully peer-reviewed security proof that demonstrates with absolute mathematical rigor that SLH-DSA is secure iff the hash function used in it is secure.

So... a relative newcomer is more trustworthy than the algorithms we've used for decades, precisely because we no longer rely on "hasn't been broken so far" as our only evidence of security.

As for AES, the subject of the discussion above, there is no security proof for AES. There's nothing to reduce it to. There are proofs that it is secure against specific attack techniques (linear cryptanalysis and differential cryptanalysis) that were able to defeat other block ciphers, but those proofs only prove security against those specific attacks, not other attacks that are not yet known. So for AES we really do rely on the fact that it has withstood 20+ years of focused cryptanalysis, and that no one has managed to find an attack that significantly weakens it. That could change at any time, with or without quantum computers.

SLH-DSA, however, is one that very well may be secure forever, against both classical and quantum attacks. The security proof doesn't even care about classical vs quantum, it just proves that any successful attack, no matter how it's performed, provides a way to break the underlying hash function. Therefore, if the hash function is secure, SLH-DSA is secure. It's an incredibly powerful proof, like many proofs by contradiction.

Comment Re:This reminds me of something (Score 2) 47

Reply "yes", then close and reopen this message to activate the link.

No matter how idiot-proof you make technology, God will always create a better idiot. That's why the right way to solve this problem is:

  • Make it as hard as possible for users to accidentally do something that is irreversible, and as easy as possible to roll back even serious mistakes. This means, among other things, keeping more than just a single backup. (Apple, I'm talking about your borderline useless iCloud backups here when I say that.)
  • Make SSNs easily changeable and less easily guessable.
  • Make it technologically as hard as possible to send out messages in a way where the sender's identity can be forged to look like it comes from someone else.
  • Aggressively prosecute phone companies who allow calls and text messages onto their network from fake phone numbers.
  • Aggressively track down, prosecute, and very publicly make an example of every person who tries to pull one of these scams, along with the people who employ them, so that anybody considering pulling such a scam is aware of previous scammers who have ended up behind bars for thirty to life within six months of starting their scam.

But IMO, the most important one is that last one. We would be a lot better off if the right to a speedy trial were taken seriously. If a year or more passes between committing a crime and being prosecuted, the threat of prosecution ceases to be a meaningful deterrent to crime.

If I were in charge, there would be two nationwide statutes of limitations added that apply to all crimes:

  • Charges must be filed within six months* of law enforcement having solid evidence showing who committed a crime. Just cause must be shown for any exceptions to this. If law enforcement fails to show that they received significant supporting evidence that made it possible to bring their case during the six-month period prior to filing charges, the charges are automatically dropped.
  • Cases must begin within thirty days* of bringing charges. If the case cannot begin within 30 days, the charges are dropped.

* I'm willing to consider arguments that these numbers should be slightly higher, but not dramatically so.

If legitimate extenuating circumstances outside the control of prosecution warrant a delay (e.g. the defendant being impossible to locate or in another country), a judge could order the statute of limitations tolled. But otherwise, the only exceptions should be in situations where a mistrial or similar forces a new trial (which obviously starts more than 30 days after the initial charges are filed). And even for a retrial, there should be a hard limit of maybe 90 days from the end of the previous trial or thereabouts.

This would result in a very large number of cases not getting prosecuted, but by forcing the prosecution to triage cases and bring important cases quickly, it would ensure that fear of being brought to justice would be a real deterrent to committing crimes. Right now, it is not. Good people don't (intentionally) commit crimes, because they have morality and ethics. Bad people do, because they have neither. Almost nobody avoids doing crime merely out of fear of punishment, and that's a bad thing.

Comment Re:Dolby is run by fuckwads (Score 1) 42

Errr no, they very much do make technology. Quite a bit of it, actually. Lots of what is marketed under Dolby Vision and Dolby Audio was developed in-house, and they spend a quarter of a billion dollars every year on R&D. Heck, even the noise cancelling ability in video conferencing software, along with music detection, was largely developed by Dolby.

I would still consider them patent trolls at this point. Legitimate patent holders use patents immediately or hold them to use defensively. They do not sit on patents for an entire decade, waiting for the patented technology to be ingrained in the industry, and then use them to earn income. The patent having been created in-house rather than acquired doesn't change the fact that the behavior is fundamentally similar.

Just because you don't see their products on the shelves at Best Buy doesn't mean they don't make those either. They produce reference monitors for colour grading Dolby Vision content, they have an entire line of cinema audio speakers, and they make the rest of the cinema audio stack as well as a first party product, including multichannel amplifiers and audio pre-processors for Atmos content - a codec they also developed from the ground up.

Dolby Atmos was 2012. Dolby Vision was 2014. How are they not basically a non-practicing entity at this point?

The fact they sit on a bunch of related patents is just the nature of any R&D development.

Yes, but using them offensively after sitting on them violates the doctrine of Laches. In effect, they sat on the patents so that people would end up depending on AV1, because if they sued too early, AOMedia would have designed around the patent, and they would get nothing. So they deliberately delayed action to cause prejudice to the defendant.

At this point, it would be entirely reasonable for a judge to declare that because they failed to act against AOMedia within the 6-year window prescribed by patent law, they lost their right to sue AOMedia for damages in creating the patented technology, and that patent exhaustion applies to all downstream users. And if that happens, I will laugh so hard.

Comment Re: Why are lawsuits allowed against end users? (Score 1) 42

Imagine your little startup patents something and is egregiously copied by a large, rich company. If the startup doesn't immediately have the funds to sue, the other company just gets to use the tech without licensing the patent, with no consequences. Seems unfair.

Dolby is not a startup. It was founded in 1965.

Also, the doctrine of Laches says you cannot unreasonably delay filing a lawsuit. Waiting ten years from the first release of the specification is clearly unreasonable. Waiting eight years from the first finished implementation is clearly unreasonable.

The bigger problem for Dolby is that patent law won't let you recover damages at all for infringement more than six years in the past, and the standard has been available for eight. So unless somehow this is some wacky patent where Dolby claims that some use of an otherwise non-patent-protected codec is patented (which should almost certainly result in that patent getting overturned for obviousness), Dolby should be laughed out of court.

But I'm sure they're hoping that Snapchat caves and agrees to go back to a Dolby codec or pay them royalties rather than fight them in court. This is patent troll behavior. Dolby has effectively become a patent troll, IMO.

Comment Re:Why are lawsuits allowed against end users? (Score 2) 42

Unfortunately, from a legal point of view, AOMedia hasn't done anything against Dolby. It's simply created a video compression codec. It doesn't use the codec, it just publishes documentation on how to use it.

From a patent law point of view, it is illegal to create something that violates a patent, not just to use it. Patent law kicks in when you create, offer for sale, sell, import, or otherwise distribute a patented invention.

IMO, one of the biggest flaws in patent law is that it covers the use of inventions in all cases except for patent exhaustion (sale of an already-licensed product). With the exception of pure process patents, IMO, that should not be a violation, as a user has no realistic way of knowing that something they bought violates someone else's patent, and should not even need to worry about such nonsense.

This "feature" of patent law exists solely to give the patent holder more leverage to screw the company accused of violating the patent by holding their innocently infringing customers liable, causing irreparable reputational damage to both companies, irreparable harm to countless others, etc., and it should have been eliminated decades ago.

That said, having seen this behavior by Dolby, I hereby vow to never knowingly buy any product that they manufacture, nor support their products or technology, nor use it except in situations where the content creator or distributor leaves me no alternative. They've gone from being a legitimate technology company to a glorified patent troll. Instead of innovating and making the world better to enrich themselves, they are suing anybody and everybody and making the world worse to enrich themselves.

Moreover, absent gross incompetence by Dolby's legal counsel, it seems clear that Dolby flagrantly and willfully violated the doctrine of Laches to allow damages to accumulate for eight full years from the final release (and ten years from the first specification release), thus allowing AV1 to become the dominant codec so that they could then predatorily use their patents to squeeze money out of the industry. Their behavior is nothing short of unconscionable, and whether due to incompetence or malice, their legal counsel should be formally sanctioned for it.

Finally, if Dolby wins, it is paramount that the entire technology industry agree to never license *any* future Dolby technologies going forwards, because doing so will only encourage them to use the patent system to prevent free and open standards. The only way to prevent patent abuse is to stop feeding the companies that abuse patents.

It is my fundamental belief that data formats should not be allowed to be protected by copyright or patents under any circumstances, because doing so fundamentally violates the rights of the owners and creators of that content. It makes it so that users can potentially lose access to data that they created. And this is wholly unacceptable, for the same reason that renting software is unacceptable.

In short, Dolby and its lawyers can go f**k themselves with a shovel.

Comment Re:double standards (Score 1) 80

they're all 100% letting the Epstein saga slide.

Almost makes you want a Putin-like strongman to sort them all out, right haruchai?

If Putin had been around, he'd have been in the Epstein files, too. It's vanishingly-unlikely that any strongman like that wouldn't also be a sexual abuser. It's all part of the same disrespect for others.

Comment Re:Does no one remember? (Score 1) 182

Not as remarkable as Linux, which somehow has become so despite (virtually) no paid developers.

Linux has a large number of highly-paid developers. If you look at the kernel, specifically, there are basically no unpaid volunteers contributing significantly to it, and there haven't been for a long time. The right way to understand kernel development is as a collaboration between a large number of corporations, each of whom contributes the paid work of skilled engineers and most of which also contribute cash to a foundation that employs the highly-paid engineers who coordinate all of the work (notably Linus, who makes a seven figure salary -- honestly, ought to be eight figures, but he's certainly not hurting).

If you look beyond the kernel to the other tools and desktop environments, the volunteer participation rises significantly, but there's also a lot of paid work.

Comment Re:Why do we trust the big ones? (Score 1) 64

We are not going to get AGI this century.

You cannot possibly know that.

AGI is not a question of throwing more computing power at the problem. Something fundamental is missing and we have no idea what.

This seems plausible, but it implies that you cannot possibly know whether we're going to get AGI this century. If it's true, it means that we'll get AGI when we discover that as-yet-missing knowledge, and there's no way to predict when that might happen. It might have happened yesterday and we just don't know it yet. What is certain is that (a) the knowledge exists and (b) we're looking for it, really hard.

Comment Re:Why do we trust the big ones? (Score 1) 64

Y2K is a better example. Y2K could have been a catastrophe. A decade before it happened, we started working to fix all the systems. Hundreds of millions of dollars (maybe billions) were spent on Y2K remediation. Then Y2K came and... nothing much happened. Lots of people pointed and said "Haha! All that money spent fixing the problem was a waste!", but they were wrong. All of the money spent fixing the problem fixed the problem.

This is what we have to do with cryptography and quantum computers. If we wait until practical QCs arrive, we'll be in big trouble. Not only will it take years to replace all of the classical crypto infrastructure (which is why we need to do the work before the QCs arrive), but in some cases there will be no possibility of remediation at all. There are two major categories:

1. Harvest-now-decrypt-later (HNDL). Any case where data needs to be protected for decades is subject to attacks that involve storing the data now and holding it until quantum computers can decrypt it. We undoubtedly have a lot of data that is already stored for later decryption, but we want to avoid increasing that risk further.

2. Hardware trust. Secure hardware requires trusted firmware, which requires burning public keys and verification algorithms into ROM, and many of these devices will be in service for decades. So we need to be able to deliver secure firmware updates for decades, using keys and algorithms we burn into ROM now. This is particularly relevant to me, because I'm working on firmware for automobiles, which have a 5-year development window, and a ~20-year (or more!) service life. So I'm working on systems that need to be securable through 2051, and it's pretty important because these vehicles have some degree of self-driving capability. A vulnerability that enables mass compromise and takeover could be used to mount a horrific terror attack.

So, yes, this matters. Probably. It's possible that practical quantum computing will never emerge, but given the tremendous progress over the last few years, that seems like a bad bet. Google's 2029 target is wise.

Comment Re: Mac OS has already started to pester me (Score 2) 64

AES-256 will remain quantum resistant forever. QCs only get you a halving of the bits for block-ciphers.

These statements are too strong -- in both directions!

First, although Grover's algorithm is proven to be the optimal quantum algorithm for generalized search, you don't necessarily need a generalized search algorithm to break a block cipher. Block ciphers have internal structure that may be exploitable by quantum algorithms. Indeed, researchers have made some progress in applying Simon's algorithm to attack Feistel network-based ciphers (which AES is not, but the previous standard cipher, DES, is). Those attacks are not a practical way to break real Feistel network ciphers, but more research may improve them. So it's certainly possible that researchers could identify AES substructure that can be attacked with quantum computers, and this could result in a quantum algorithm that breaks AES. We have no hint of anything like that, and no one really considers it likely, but "quantum resistant forever" is too strong.
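For readers unfamiliar with the structure being discussed, here's a toy Feistel cipher in Python. This is purely illustrative (the hash-based round function is something I made up; real DES uses carefully designed S-boxes and a key schedule). The point is the structure the quantum attacks exploit: each round replaces one half of the block with a keyed function of the other half, and decryption is the same network run with the round keys reversed.

```python
import hashlib

def F(half: int, round_key: bytes) -> int:
    # Toy keyed round function: hash the 32-bit half with the round key.
    data = half.to_bytes(4, "big") + round_key
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def feistel_encrypt(block: int, round_keys) -> int:
    # Split the 64-bit block into two 32-bit halves.
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in round_keys:
        # Classic Feistel step: (L, R) -> (R, L xor F(R, k)).
        left, right = right, left ^ F(right, k)
    return (left << 32) | right

def feistel_decrypt(block: int, round_keys) -> int:
    # Same network, round keys in reverse order, step inverted.
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in reversed(round_keys):
        left, right = right ^ F(left, k), left
    return (left << 32) | right

keys = [bytes([i]) * 8 for i in range(16)]  # 16 toy round keys
pt = 0x0123456789ABCDEF
ct = feistel_encrypt(pt, keys)
assert feistel_decrypt(ct, keys) == pt
```

A nice property of this construction (and why DES used it) is that F never needs to be invertible; the XOR structure makes the whole network invertible regardless. That same algebraic regularity is what Simon's-algorithm-style attacks latch onto.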

Second, the claim that QCs get you a halving of the bits for block ciphers using Grover's algorithm is technically correct, but overstates the practical reality. Even assuming we had large, reliable, and cheap quantum computers, the way Grover's algorithm would be applied to breaking AES requires 2^(n/2) sequential operations, each of which is a non-trivial quantum circuit. Moreover, other practical considerations, which are way too complicated to get into here -- in large part because I don't really understand them; I'm repeating what more-knowledgeable colleagues say -- mean that AES-128 will probably retain ~90 bits of security, which means it will probably remain secure forever, assuming no better-than-Grover algorithm exists.
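To put rough numbers on the halving claim (the 25-bit overhead figure below is an illustrative assumption I chose to match the ~90-bit estimate above, not a published figure):

```python
def grover_security_bits(key_bits: int, overhead_bits: float = 0.0) -> float:
    # Ideal Grover search over 2^n keys takes ~2^(n/2) sequential
    # iterations, i.e. n/2 bits of security. Real attacks pay extra
    # cost (error correction, circuit depth limits), modeled here as
    # additional bits of work for the attacker.
    return key_bits / 2 + overhead_bits

for n in (128, 192, 256):
    ideal = grover_security_bits(n)
    practical = grover_security_bits(n, overhead_bits=25)
    print(f"AES-{n}: ideal Grover {ideal:.0f} bits, "
          f"with assumed overhead ~{practical:.0f} bits")
```

So even AES-128, nominally reduced to 64 bits by an ideal Grover attack, plausibly stays near 90 bits once the sequential-circuit costs are counted, which is still far beyond any foreseeable attacker.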

Comment Re:NIST algorithms (Score 1) 64

Wasn't NIST shown to have been compromised by the NSA? Is this still the case?

No.

What was shown is that one random number generation algorithm was found to have been backdoored at the NSA's request. There is no evidence that this has ever happened with any of the other NIST-standardized algorithms, and it's also known that the NSA has stepped in to strengthen other NIST algorithms. Notably, with DES, the NSA both strengthened the cipher by improving the S-boxes and weakened it by asking for a smaller key size -- but that wasn't a secret weakening; everyone understands the implications of smaller key sizes, and where necessary there are easy workarounds, hence triple-DES, which is still secure today.

All of this was in the past when the NSA held a considerable lead in cryptographic knowledge over academic cryptographers. It seems very unlikely that this is the case any more, and in fact at this point basically all novel cryptographic knowledge seems to be flowing in the other direction.

The only other case that people do wonder about a little bit is the choice of the elliptic curves used for ECDSA and ECDH. NIST never published any rationale for those curve choices, or any publicly-verifiable information about the selection process. Most likely this is because there was no systematic choice process; they chose curves at random, verified their security properties, and went with them. But it's possible that these specific curves have some internal structure that the NSA knows about and can exploit while the rest of the world doesn't. After decades of fruitless scrutiny this isn't likely, but it's possible, which is why Daniel J. Bernstein ensured that his Curve25519 did have a clear, rational, and publicly-verifiable construction process, and that's part of why many systems prefer Ed25519 and X25519 over ECDSA and ECDH (Curve25519 is also faster and has smaller public keys).

In the case of the PQC algorithms, all of them were created by academic cryptographers, and there are no suspicious modifications. So if the NSA has backdoored them, they've done it really, really subtly. IMO, it's not a risk worth worrying about unless you're specifically defending against the NSA, and probably not even then. For commercial work, like I do, it's sufficient to trust that the NSA is smart enough to have learned from the Dual_EC_DRBG debacle. In that case, not only did the backdoor eventually come out fully, along with evidence that the NSA had paid at least one company to use that algorithm, but academic cryptographers suspected its existence even before the standard was published, and spoke publicly about it. That was probably what motivated the NSA to pay for its use, since it was under a cloud of suspicion from the beginning. So it was a very foolish move by the NSA, driven by extreme and obviously unjustified overconfidence in their lead in cryptographic knowledge.
