Comment Re:All the new desktops.... (Score 1) 181
Yeah, that makes sense.
Of course, it's the nature of the open-source world that there will be a bunch of different projects that more or less all function similarly.
Quoting the original article, in the "Linux Mint" section:
"but replaces them with dodgy bits of its own, such as a confusing choice of not one, not two, but three Windows-like desktops,"
This is what led me to say that the article seems to consider having multiple desktops available a flaw.
I have this problem that, unlike what most people seem to think is obvious, I *don't* want my desktop to operate like a Mac. Using the Mac desktop feels like using a text editor other than my preferred one (which is increasingly necessary as text editing moves into whatever javascript monstrosity is attached to the collaboration or notebook platform you're forced to use); I'm always fighting with it and trying to work around its little assumptions that are different from what I want.
Lots of the Linux desktops have been moving in directions that are nominally meant to make them more Mac-like. I think they often don't really succeed, or they only *would* succeed if everybody else did what their designers thought everybody should do. But, as a result, I recently went back to FVWM, because I can make it work the way I want to. I used Xfce for a number of years, but as time went by, it was getting harder and harder to configure it to do what I want. Giving in to what I find to be the extremely annoying trend of "client-side decorations" was the last straw for me.
FVWM's configuration system is *exactly* the sort of thing people point at when they try to say that it makes Linux unusable on the desktop. And, it's exactly what makes it usable for me.
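To give a flavor of what that configuration system looks like (these particular lines are purely illustrative, not taken from any real config), an FVWM config file is just plain text commands, for example:

```
# Illustrative snippet only -- bindings and styles chosen as an example
Style *        SloppyFocus          # focus follows mouse, loosely
Mouse 1 R A    Menu RootMenu Nop    # button 1 on root window opens a menu

# A user-defined function: raise on click, move on drag
AddToFunc MoveOrRaise
+ I Raise
+ M Move
Mouse 1 W M    MoveOrRaise          # Meta + button 1 on a window invokes it
```

Every behavior is an explicit line like these, which is exactly why it looks terrifying to newcomers and exactly why it bends to whatever you want.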
The broader point is: the original article seemed to indicate that having a choice of desktops was a *flaw*. It confuses users, or something. But, from my point of view, having a choice of desktop managers is the killer feature of Linux on the desktop. You aren't stuck with the default assumptions of either Windows or Mac. Yeah, there are still a lot of default assumptions built in, but there's a lot more flexibility than you find in other worlds. The ascendancy of GTK (linked with GNOME) and the assumptions it's trying to force on the Linux desktop world are not healthy, in my opinion, as it's going to make it harder and harder for people to configure desktops if they don't like the built-in assumptions of GTK. But, at least for now, you can still get things to work the way you want if what you want doesn't happen to match either the majority, or what somebody has convinced the majority to think they want (or at least accept).
The US Government made the mistake of thinking that it was in charge of the country, instead of Amazon.
Amazon will set them straight.
My favorite conspiracy theory was very short lived.
I remember being in Minnesota on the evening and night of December 31, 1999. Reports were starting to come in from Asia, Australia, etc., that clocks had rolled over to the year 2000, and pretty much everything was fine. We even got to Europe, and to GMT, and, yeah, everything's fine.
Here was the conspiracy theory: it was just the radio reports we were receiving over in the states. In fact, everything had fallen into flames and chaos, and they were trying to cover it up from the rest of the world so that there wouldn't be a premature panic as you got closer and closer to the international date line.
Like I said, a very short-lived conspiracy theory.
On the other hand, QED is itself a theoretical framework (of exactly the kind that the author of the article wants to see for as much knowledge as possible) that allows us to systematize and predict lots of behavior.
Yeah, there are input parameters into QED that we don't have an explanation for. (Why is the mass of the electron what it is? Etc.) However, it's a theory that provides a general description of the interaction of charged particles and photons, and from that we can model a wide range of behavior. We have underlying principles.
The QED analogy for the kind of AI "understanding" the author writes about would be if all we had was a big list of correlations between particle energies going into a collision, and angular distributions, momentum distributions, etc., coming out of the collision. We could then feed a list of collision parameters in, and get a prediction for the probabilities of what comes out, but it would all be a black box whose operation is mysterious. If something didn't match the prediction, we wouldn't know whether our black box was breaking down, or whether the predictions were only ever supposed to be 95-odd-percent accurate anyway. In contrast, if we start seeing behavior that is not consistent with the theoretical predictions of QED, as long as the experimental significance is good, we *know* that we're seeing something that violates the theory, as opposed to just having a case that didn't come out right from our black box. That tells us we need to seek a deeper understanding to explain the new behavior.
The point isn't machine learning (or other kinds of pattern-finding) vs. human intuition. The point is stuff that seems to be working but we don't know why, vs. stuff that works and we have a general theory that explains why it works.
The aspirin example he gives in the article is a good one.
Lots of discoveries start with a correlation we don't understand, or with intuition. In the best cases, those then later get generalized to a theory that allows us to understand a general category of phenomena, and then even to predict new phenomena and make more things work. What's more, as he says in the article, when we start seeing patterns or behaviors that don't match our theory (e.g. the orbit of Mercury and Newton's gravity), we know that we're reaching the limits of our theory and have to try to expand our understanding. If all we have is empirical correlations (which is what neural-net style AIs give us, albeit highly opaque ones), then we can't tell whether something is systematically off, or whether we just need to expand the training set a bit.
Having an underlying theory to help us understand how things work is an extremely important part of intellectual development. The author's point is that the current AI boom is rapidly expanding the number of things we have that seem to work without an underlying understanding. This means we're building a bigger and bigger fragile superstructure of "what seems to work" on top of a foundation that's not growing fast enough to ground it.
But *somebody* understands how the car works. There's a key difference here.
There *is* a theory of how and why a car works the way it does. You don't have to know it to use it... but the fact that we (as a society) know that allows us to make cars in the first place, and also allows us to figure out how to fix them, etc.
Understanding really is important. And "intellectual debt", as described in the article, is real.
"There are too many different and diverse desktops."
"What should we do to solve the problem?"
"Create another one!"
Whether it's textbooks, or other reliable resources, we need SOMETHING to offset the conceptual damage that stackexchange does to all kinds of technical learning.
Some textbooks are much better than others. The same is going to be true of software and digital resources. The vast majority of software that's out there for learning is not nearly as good as you might want it to be. Does the format have some potential advantages? Absolutely. But it's easier to implement and distribute really bad digital resources than it is to distribute (at least printed) textbooks of any sort. And, it's easier to put together websites and videos of small bits of concept one at a time than it is to put together a whole coherent textbook -- whether good or bad, in either case.
The real question is, with Red Hat as a core contributor: how long before systemd becomes a required dependency?
In the linked Cosmos article there is this quote from one of the authors:
The situation is reminiscent to the problem Galileo had with the Catholic priests of his time – most refused to look through his telescope to observe the moons of Jupiter.
Obviously, this doesn't prove anything, but I like to say that "everybody who's wrong thinks he's Galileo". Invoking the Galileo affair is to science crackpottery what Godwin's law is to Internet discussions.
Stephen Hawking was at Caltech in the 1990s giving a public talk when he conceded this bet. He visited Caltech for a semester twice while I was in grad school there between 1990 and 1996. I remember one physics colloquium; I understood about the first five minutes of the talk. This was in the middle of an ongoing theoretical project where both of them were trying to answer the question: could an arbitrarily advanced civilization, constrained only by physics but not by financial or engineering considerations, construct a traversable wormhole? The question came about when Carl Sagan called up Kip to ask it. (This was reported by Kip when he was giving a talk about black holes to the intro physics course at Caltech; I was a TA at the time.) In the physics colloquium that Stephen was giving, he and Kip got into a bit of an argument at the end during questions, and I remember Stephen saying something along the lines of "even somebody as tough and powerful as you, Kip, wouldn't survive that".
Each time he visited, Stephen also gave a public talk, which was *extremely* well attended. Indeed, at at least one of them, I didn't make it into the auditorium where the talk itself happened, but into another auditorium on campus where they were (what we would today call) live streaming the talk. At the end, when Stephen was taking questions, it would take him a couple of minutes to compose the reply on his keypad thingy. To keep everybody from getting restless, Kip would talk to the audience. During one of these questions, Kip was telling everybody about the bet. When Stephen's answer came out, he'd decided not to answer the question, but instead conceded the bet to Kip. It was quite fun to watch.
Many people were there to see this; I'd be surprised if there weren't others reading this thread who had seen it.....
In what way is this relevant?
It is telling that all papers by this author and his collaborators seem to live in a closed citation ecosystem where they cite only each other. I am not familiar with the "Galaxies" journal. At least one of these papers is from A&A, which *is* a real peer-reviewed journal.
There are many red herrings here. First of all, the whole "we have a model that can explain galaxy rotation curves without dark matter" is not nearly as meaningful as some seem to say it is. There is a whole host of observations explained by dark matter, in detail, and with precision. Explaining just one of them doesn't do much if you can't explain all of the rest of the observations.
Likewise, the Big Bang model has a host of observations that support it, in detail, and with numerical precision.
The "electric universe" is not something that is worth paying attention to.
For popular-level information about the problems with the whole electric universe business, see this site: https://rationalwiki.org/wiki/...
With all the fancy scientists in the world, why can't they just once build a nuclear balm?