
Comment Re:Here we go again.... (Score 1) 44

That's plausible.

I still hate it though. My first version of Office was 4.3, which included Word 6.0 and was ostensibly for Windows 3.1. I'd previously used ClarisWorks on Macintoshes in school, and before that a cheap MS-DOS program that called itself a word processor but was really a glorified text editor, though it worked well with an Epson dot-matrix printer's formatting. So for me Word was great; I felt like the bumpers from ClarisWorks had been removed, and I had a lot more control over what I could do to a document.

The Ribbon feels like a decision that power users didn't matter. It also coincided with the end of WordPad, the free light-duty word processor, and came long after Microsoft Works was killed off.

Comment Re:Here we go again.... (Score 3, Interesting) 44

They seem to have forgotten why some of their most popular applications became leaders in their respective categories, and that wasn't just leveraging their OS marketshare and OEM install dominance. It was a combination of reasonably good UI design that had a degree of intuitiveness along with fairly easy access to more advanced features, with an added dash of the ability to use data from one application in another without major headaches. Arguably MS Office in the days before the Ribbon and Metro UIs exemplified this.

Unfortunately they chose to change the UI for change's sake, i.e., because users wouldn't recognize that they had a shiny new version of the product unless the UI flagrantly changed, and the UI designs they chose frankly sucked. They also seem to have harmed that interoperability by pushing too much of it where it doesn't fully work right.

Obviously there have been software companies whose products were better for the professionals constantly using them, like WordPerfect versus Word, but those didn't generally work well for both the power user and the casual user. Originally Microsoft had managed to bridge that gap. But the Ribbon and Metro interfaces have harmed the power user: it's now harder to do things than it should be, and power users have an incentive to look for software that gives them the features without the bloat.

I doubt that Microsoft is going to understand this in this revamp. They're going to cram in some UI change solely to make it different from the prior version, and even if it's now "native" it's still going to suck. And they're going to try to force any remaining users of prior versions of Windows off of those and onto Windows 11.

Comment Re:25,000 lines of code (Score 1, Interesting) 60

It might take one person one year to write 25k lines.

A year? I've regularly written that much in a month, and sometimes in a week. And, counter-intuitively, it's during those sprints when I'm pumping out thousands of lines per day that I write the code that turns out to be the highest quality, requiring the fewest bugfixes later. I think it's because that very high productivity level can only happen when you're really in the zone, with the whole system held in your head. And when you have that full context, you make fewer mistakes, because mistakes mostly derive from not understanding the other pieces your code is interacting with.

Of course, that kind of focus is exhausting, and you can't do it long term.

How does a person get their head around that in 15 hours?

By focusing on the structure, not the details. The LLM and the compiler and the formatter will get the low-level details right. Your job is to make sure the structure is correct and maintainable, and that the test suites cover all the bases, and then to scan the code for anomalies that make your antennas twitch, then dig into those and start asking questions -- not of product managers and developers, usually, but of the LLM!

But, yeah, it is challenging -- and also strangely addictive. I haven't worked more than 8 hours per day for years, but I find myself working 10+ hours per day on a regular basis, and then pulling out the laptop in bed at 11 PM to check on the last thing I told the AI to do, mostly because it's exhilarating to be able to get so much done, at such high quality, so quickly.

Comment Re:Was not expecting them to admit that (Score 1) 54

They had to say it that way, because the more accurate statement is that the dealership law unfairly advantages existing automakers.

Even the entrenched automakers don't want dealerships to exist; they would all prefer to sell directly. They have better ways to keep down competition at the federal level. Dealerships just take a cut of revenue the automakers could otherwise keep entirely for themselves.

That's a valid point, though right now while they're facing competition from startups the dealerships do provide them with a moat that they want to preserve. If/when the startup threat is gone, the automakers will go back to hating the dealerships.

I think people forget how everyone laughed at Tesla because everyone knew that starting a new car company in the United States was impossible. Now we also have Lucid and Rivian. Maybe someday Aptera will manage to get off the ground. This is a novel situation for American carmakers.

Comment Re:Was not expecting them to admit that (Score 4, Informative) 54

>arguing it unfairly advantages startups

Way to say your dealers suck.

They had to say it that way, because the more accurate statement is that the dealership law unfairly advantages existing automakers. It's not about the dealerships being good or bad, it's about the fact that setting up a dealership network takes a lot of time and money and requiring it is a good way to keep new competition out.

Comment Re:The old guard bribed these restrictions (Score 4, Interesting) 54

into place to protect their oligopoly. Some blame it on "socialism" when it's really crony capitalism.

The correct term is "regulatory capture": private businesses use the power of the state to protect, subsidize, or otherwise benefit themselves while harming competitors and potential competitors. It's extremely common, and the more pervasive the regulation, the more common it is. Red tape and government procedures benefit entrenched players who have built the institutional structures and knowledge to deal with them.

This isn't to say that all regulation is bad... but a lot of it is. There was never any consumer benefit to banning direct sales. All regulations should be thoroughly scrutinized for their effects on the market, direct and indirect.

Comment Re:Good but they 'summarized' al the science. (Score 3, Insightful) 65

Anything that wasn't action, drama, or comedy was largely dropped and almost all of the science was quick summary explanations.

I think that's necessary. Providing explanations of depth comparable to the book would require a 10-hour movie. Squeezing the story down to feature length requires cutting a lot of exposition. In many books there's a lot of description that can be replaced with visuals, but it's pretty hard to do that with a lot of the science.

Comment Re:\o/ (Score 1) 75

uh, no. You didn't win.

Places like Bell Labs were more like university research centers than corporate window dressing on a mandatory-overtime grind. They were not expected to directly turn a profit as business units of the company, because what they did was lay the groundwork for technology that the other business units could then adapt into products; the return on the investment of running them took years or even decades to realize. Without the pressure to turn quarterly or even annual profits, they weren't working their researchers to the bone, and they fostered an internship culture that drew college students into their ranks as researchers, perpetuating the institutional knowledge.

Comment Re:LLMs can't explain themselves (Score 1) 40

One issue with the overall architecture (which is just statistical prediction) is that it can't really provide useful insights on why it did what it did.

I think you're describing the models from a year ago. Most of the improvements in capability since then (and the improvements have been really large) are directly due to changes that have the AI model talk to itself to better reason out its response before providing it, and one of the results of that is that most of the time they absolutely can explain why they did what they did. There are exceptions, but they are the exception, not the rule.

It's interesting to compare this with humans. Humans generally can give you an explanation for why they did what they did, but research has demonstrated pretty conclusively that a large majority of the time those explanations are made up after the fact, they're actually post-hoc justifications for decisions that were made in some subconscious process. Researchers have demonstrated that people are just as good at coming up with explanations for decisions they didn't make as for decisions they did! The bottom line is that people can't really provide useful insights on why they did what they did, they're just really good at inventing post-hoc rationales.

Comment Apply Betteridge's Law (Score 4, Insightful) 47

And the law of large numbers. Statistically, there will be patch clusters, the same way there are clusters of every other random-ish event. The fact that one happens to occur right after Microsoft promises a commitment to predictable patch schedules means not just nothing, but the opposite: any commitment to doing better means they recognize they haven't been doing well enough, and obviously it's not possible to do significantly better immediately; changing processes takes time, and observing the effects of those changes takes even longer.

So, no, this cluster of patches doesn't tell us anything in particular beyond what we already knew: That emergency patches are relatively common.
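The clustering claim is easy to check empirically. Here's a quick sketch (the 5% daily probability and 30-day window are made-up numbers for illustration, not Microsoft's actual patch rate) modeling emergency patches as independent daily coin flips and looking for the busiest window:

```python
import random

random.seed(42)

# Model out-of-band patches as independent daily events:
# each day has a 5% chance of an emergency patch, over ten years.
days = 3650
p = 0.05
patches = [random.random() < p for _ in range(days)]

# Average patches per 30-day window vs. the busiest 30-day window.
expected = 30 * p  # 1.5 patches per window on average
best = max(sum(patches[i:i + 30]) for i in range(days - 30))

print(f"average per 30-day window: {expected:.1f}, busiest window: {best}")
```

Even with no underlying cause, the busiest window comes out well above the average, which is exactly the "cluster" a headline writer would seize on.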

Comment Re: Mac OS has already started to pester me (Score 1) 65

"quantum resistant forever" is too strong.

I've only taken fairly general master's level courses in quantum information and regular cryptography, but I agree with this overall sentiment. My math professors used to say that no asymmetric encryption scheme has been proved unbreakable; we only know that they haven't been broken so far. Assuming something is unbreakable is like saying Fermat's last theorem is unprovable, until one day it's proved. So to me "post quantum cryptography" is essentially a buzzword.

Yes, but... I think you're confusing some things. We're talking about AES, which is a symmetric encryption algorithm, not asymmetric.

Of course, no cryptographic construction has been "proven" secure, in the sense that mathematicians use the word "prove", neither symmetric nor asymmetric. Asymmetric schemes have an additional challenge, though, which is that they have to have some sort of "trapdoor function" that mathematically relates a public key and a private key, and the public key has to be published to the attacker. Classical asymmetric cryptography is built by finding a hard math problem and building a scheme around it -- which means that a solution to the math problem breaks the algorithm.

Symmetric systems have it a bit easier, because the attacker doesn't get to see any part of the key or anything related to the key other than plaintext and corresponding ciphertext (though the standard bar is to assume the attacker has an oracle that allows them to get plaintext of arbitrary ciphertexts, i.e. the Adaptive Chosen Ciphertext attack, IND-CCA2). And the structure of symmetric ciphers isn't usually built around a specific math problem. Instead, they tend to just mangle the input in extremely complex ways. It's hard to model these mathematically, which makes attacking them with math hard.
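To illustrate the "mangle the input in complex ways" point, here's a toy Feistel network (the round function, key schedule, and block size here are all invented for illustration; real ciphers are engineered far more carefully). The structure is interesting because it's invertible even though the round function itself is one-way:

```python
import hashlib

def F(half: bytes, round_key: bytes) -> bytes:
    # Round function: any keyed mangling works, even a non-invertible one.
    # Here, a hash truncated to the half-block size.
    return hashlib.sha256(round_key + half).digest()[:len(half)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel(block: bytes, keys, decrypt: bool = False) -> bytes:
    # Split a 16-byte block into two halves; each round mangles one half
    # with the other and swaps them.
    L, R = block[:8], block[8:]
    for k in (reversed(keys) if decrypt else keys):
        L, R = R, xor(L, F(R, k))
    # The final swap makes the same function work for both directions.
    return R + L

keys = [b"k1", b"k2", b"k3", b"k4"]
ct = feistel(b"sixteen byte msg", keys)
pt = feistel(ct, keys, decrypt=True)
```

The XOR cancels itself out when the rounds are replayed with the keys reversed, so decryption never needs to invert F, which is why the "mangling" can be arbitrarily complex.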

In both cases, we are unable to prove that they're secure. When I started working on cryptography, the only basis for trust in algorithms was that they'd stood up to scrutiny for a long period of time. That was it. Over the last 20 years or so, we've gotten more rigorous, and "security proofs" are basically required for anyone to take your algorithm seriously today... but they aren't quite like "proofs" in the usual sense. They're more precisely called "reductions". They're mathematically-rigorous proofs that the security of the algorithm (or protocol) is reducible to a small set of assumptions -- but we have to assume those, because we can't prove them.

For most asymmetric schemes, the primary underlying assumption is that the mathematical problem at the heart of the scheme is "hard". Interestingly, there is one family of asymmetric signature schemes for which this is not true. SLH-DSA, one of the post-quantum algorithms recently standardized by NIST, provably reduces to one assumption: that the hash algorithm used is secure, meaning that it has second pre-image resistance plus a more advanced variant of second pre-image resistance. Collision resistance isn't even required! This is striking because we actually have quite a lot of confidence in our secure hash algorithms. Secure hash algorithms are among the easiest primitives to create, because all you need is a one-way function with some additional properties. And we've been studying hash functions very hard, for quite a long time, and understand them pretty well.
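For a sense of how a signature scheme can reduce entirely to hash security, here's a toy Lamport one-time signature, the 1970s ancestor of the hash-based family that SPHINCS and SLH-DSA descend from. This is a sketch for illustration, not the SLH-DSA construction, and note a Lamport key must never sign more than one message:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(msg_bits: int = 256):
    # Private key: two random 32-byte preimages per message-digest bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(msg_bits)]
    # Public key: their hashes. Forging requires inverting H.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits_of(message: bytes, n: int):
    digest = H(message)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

def sign(message: bytes, sk):
    # Reveal exactly one preimage per digest bit.
    return [sk[i][b] for i, b in enumerate(bits_of(message, len(sk)))]

def verify(message: bytes, sig, pk) -> bool:
    return all(H(s) == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, bits_of(message, len(pk)))))

sk, pk = keygen()
sig = sign(b"hello world", sk)
ok = verify(b"hello world", sig, pk)
forged = verify(b"tampered", sig, pk)
```

Any successful forgery hands you a preimage of the hash, which is the reduction in miniature: break the signature, break the hash.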

This means that one of our "new" post-quantum asymmetric algorithms is probably the very strongest we have, not only less likely to be broken than our other asymmetric algorithms, but less likely to be broken than our symmetric algorithms. If it were broken, it would be because someone broke SHA-256 (which, BTW, would break enormous swaths of modern cryptography; it's extremely hard to find a cryptographic security protocol that doesn't use SHA-256 somewhere), and unless that same research result somehow broke all secure hash functions, we could trivially repair SLH-DSA simply by swapping out the broken hash function for a secure one.

This is an entirely different model from the way we looked at cryptography early in my career. SLH-DSA doesn't have decades of use and attack research behind it. Oh, the basic concept of hash-based signatures dates back to the late 70s, but the crucial innovations that make SPHINCS and its descendants workable are barely a decade old! BUT we have a rigorous and carefully peer-reviewed security proof that demonstrates with absolute mathematical rigor that SLH-DSA is secure iff the hash function used in it is secure.

So... a relative newcomer is more trustworthy than the algorithms we've used for decades, precisely because we no longer rely on "hasn't been broken so far" as our only evidence of security.

As for AES, the subject of the discussion above, there is no security proof for AES. There's nothing to reduce it to. There are proofs that it is secure against specific attack techniques (linear cryptanalysis and differential cryptanalysis) that were able to defeat other block ciphers, but those proofs only prove security against those specific attacks, not other attacks that are not yet known. So for AES we really do rely on the fact that it has withstood 20+ years of focused cryptanalysis, and that no one has managed to find an attack that significantly weakens it. That could change at any time, with or without quantum computers.

SLH-DSA, however, is one that very well may be secure forever, against both classical and quantum attacks. The security proof doesn't even care about classical vs quantum, it just proves that any successful attack, no matter how it's performed, provides a way to break the underlying hash function. Therefore, if the hash function is secure, SLH-DSA is secure. It's an incredibly powerful proof, like many proofs by contradiction.
