I certainly believe that there are new things under the sun. I just don't believe that there are as many of them as our trendsetting media would like us to think. Come to think of it, I'll bet that the truly new things under the sun are seldom well covered. I guess Sturgeon's Law applies.
Or the paraphrase, "History repeats itself because nobody listened the first time." (In practice, the singular "first time" is insufficient.)
The Baseline study project will collect anonymous genetic and molecular information from 175 people, and later from thousands more, to create a complete picture of what a healthy human being should be.
The Baseline study will help researchers detect killers such as heart disease and cancer far earlier, pushing medicine toward prevention rather than the treatment of illness.
According to Google, the information from Baseline will be anonymous and its use will be limited to medical and health purposes. Data won't be shared with insurance companies.
"To grow these bacteria, the team collects sediment from the seabed, brings it back to the lab, and inserts electrodes into it. First they measure the natural voltage across the sediment, before applying a slightly different one. A slightly higher voltage offers an excess of electrons; a slightly lower voltage means the electrode will readily accept electrons from anything willing to pass them off. Bugs in the sediments can either "eat" electrons from the higher voltage, or "breathe" electrons on to the lower-voltage electrode, generating a current. That current is picked up by the researchers as a signal of the type of life they have captured.""
We had that "pull-down mirror" in our 2008 bottom-of-the-line Sienna. I called it the "bratfinder" at the time.
There's the snarky answer, and what I suspect is the real answer.
First, systemd and everything associated with it is just so kewl and shiny that it's a privilege to even use any of it, which makes it all the more amazing that they're actually welcoming us to do so, instead of making us fight for a place in line.
Second, X11 goes way back before anyone was really concerned with security. I suspect from a core competence point of view, the X11 coders are far more comfortable and far more engaged with the graphical display code than the input side. I get the impression that a lot of effort was spent in properly cleaning and separating the root-requiring functionality. I know I've read of KMS and DRI work for years now. It's been a long road, and I believe it may have only been in the past year that the display side has gotten to the point where they could think about going rootless.
I also suspect that the input device part is not their core competence - they'd like events to come in from "elsewhere" so they can get back to their graphics work. So along comes systemd, saying, "We'll handle the gnarly details of console access and security for you," and X says OK, if only in the spirit of modularity and getting back to the graphics work. (Graphics work includes processing the inputs, not just drawing outputs - I think they'd just like the inputs to be clean and handed to them.)
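For what it's worth, the "clean inputs handed to them" model is dead simple from the display server's side: once something privileged has opened the evdev node and handed over a file descriptor, all the server has to do is read fixed-size input_event structs from it. A toy sketch, assuming the fd was already obtained from some privileged broker (how it gets there is exactly the part systemd-logind wants to own):

    #include <linux/input.h>   /* struct input_event */
    #include <stdio.h>
    #include <unistd.h>

    /* Read and print evdev events from an already-opened device fd.
     * In the rootless model the fd is handed to the display server by
     * a privileged broker; here we just assume the caller has it. */
    static void drain_events(int fd)
    {
        struct input_event ev;

        while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
            /* type/code/value identify the key, button, or axis. */
            printf("type=%u code=%u value=%d\n",
                   (unsigned)ev.type, (unsigned)ev.code, ev.value);
        }
    }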
I've also found that, sadly enough, there are plenty of people around a big company who are really good at appearing essential while actually doing nothing themselves, and who in fact are very good at creating work for others. Unfortunately they also tend to get retained through job cuts, because they appear so essential.
Though I work in a big company, we generally managed to have a small, well-focused team. That makes it a good place to work, as long as you can keep your head down, have fun, and not see the chaos and decay around you.
OK, I'll feed the troll. Either X or an X wrapper is suid root. Find the right hole in X, and you've got root. I presume that X or an X wrapper tries to do the best it can, drops capabilities, etc. But it would still be better to not be root at all.
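On the "drops capabilities" point, the textbook pattern for an suid-root helper is to do the privileged setup first and then permanently give up root before touching anything an attacker can influence. A minimal sketch of that order of operations - not a claim about what the actual X wrapper does:

    #include <grp.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Permanently drop root in an suid-root helper: supplementary
     * groups first, then gid, then uid.  If the uid drop came first,
     * we'd no longer be allowed to change the gid. */
    static void drop_root(void)
    {
        if (setgroups(0, NULL) != 0 ||
            setgid(getgid())  != 0 ||
            setuid(getuid())  != 0) {
            perror("dropping privileges");
            exit(EXIT_FAILURE);
        }

        /* Paranoia check, assuming the real invoking user isn't root:
         * regaining uid 0 must now fail. */
        if (setuid(0) == 0) {
            fprintf(stderr, "privilege drop failed\n");
            exit(EXIT_FAILURE);
        }
    }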
Are you able to explain more?
My impression is that there were two issues with non-root X - mode setting and input device management. KMS and DRI2/DRI3 take care of the former, and I'm under the impression that systemd-logind takes care of the latter. But ultimately these are all just kernel interfaces - if systemd-logind has a root helper and makes a series of kernel calls to manage the input devices, then that same job could be done by some other piece of software.
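To make the logind part concrete: it's just a D-Bus API. The display server asks its session to take control, then requests a device by major/minor; logind, running as root, opens it and passes a file descriptor back. A rough sd-bus sketch of those two calls - error handling trimmed, and the "session/self" object path is my assumption for "the caller's own session":

    #include <systemd/sd-bus.h>
    #include <fcntl.h>

    /* Ask systemd-logind for an input device fd instead of opening it
     * as root ourselves.  Sketch only; real code must also handle the
     * PauseDevice/ResumeDevice signals around VT switches. */
    static int take_device(sd_bus *bus, unsigned major, unsigned minor)
    {
        sd_bus_error err = SD_BUS_ERROR_NULL;
        sd_bus_message *reply = NULL;
        int fd = -1, paused = 0;

        /* "self" should resolve to the caller's own session; older
         * systemd may need the path from GetSessionByPID instead. */
        const char *path = "/org/freedesktop/login1/session/self";

        /* Become the session controller (force = false). */
        sd_bus_call_method(bus, "org.freedesktop.login1", path,
                           "org.freedesktop.login1.Session", "TakeControl",
                           &err, NULL, "b", 0);

        /* Ask logind to open the device and hand the fd back. */
        if (sd_bus_call_method(bus, "org.freedesktop.login1", path,
                               "org.freedesktop.login1.Session", "TakeDevice",
                               &err, &reply, "uu", major, minor) >= 0 &&
            sd_bus_message_read(reply, "hb", &fd, &paused) >= 0) {
            /* The fd belongs to the reply message, so keep our own copy. */
            fd = fcntl(fd, F_DUPFD_CLOEXEC, 3);
        }

        sd_bus_message_unref(reply);
        sd_bus_error_free(&err);
        return fd;   /* evdev fd, or -1 on failure */
    }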
Again, do you understand the base mechanism at work here?
As we better understand the universe, we find gaps between reality and our understanding. We then try to extend our understanding to better match reality, and that means filling in those gaps. Sometimes it takes many tries to fill in a gap, or at least make it smaller.
Negative mass is one of those attempts, and it's worth noting that they aren't clinging to the concept, they're simply suggesting that it's one possibility that can be tested. In other words, they actually are using Occam's Razor. In this realm, nothing is simple, which makes the Razor harder to use.
I'm thinking of code morphing, similar to what Transmeta did. Where I learned about it, the runtime translation target was called micro-ops. We have different definitions. Someone I once knew referred to micro-ops (my definition) as "caveman primitives."
Still, it's an internal CISC->RISC translation, and the retirement unit hides that when it's all done.
"Only about one in 100 trillion proton-proton collisions would produce one of these events," said Marc-André Pleier, a physicist at the U.S. Department of Energy's Brookhaven National Laboratory who played a leadership role in the analysis of this result for the ATLAS collaboration. "You need to observe many [collisions] to see if the production rate is above or on par with predictions," Pleier said. "We looked through billions of proton-proton collisions produced at the LHC for a signature of these events—decay products that allow us to infer like Sherlock Holmes what happened in the event."
The analysis efforts started two years ago and were carried out in particular by groups from Brookhaven, Lawrence Berkeley National Laboratory, University of Michigan, and Technische Universität Dresden, Germany.
I suspect you're confusing micro-ops with microcode.
Current architectures (not all, but not just Intel) decompose the user-visible instruction set into a stream of micro-ops (more primitive instructions) and send that stream to a dispatch unit. The dispatch unit resolves dependency issues and, as requirements are met, sends the micro-ops to one of a series of execution units. As micro-ops complete, their results are sent to the retirement unit. Note that between dispatch and retirement, the architectural registers have effectively disappeared, and are reassigned at retirement.
Microcode is a completely different thing - usually the opcode is translated into a subroutine entry point, and a (typically) classic Harvard-style computing engine interprets the user-visible instruction set. But it's all in lock-step, not the controlled chaos of micro-ops.
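A concrete example of the first mechanism: a read-modify-write instruction like x86's add [mem], reg doesn't execute as one unit - the front end cracks it into a load, an ALU op, and a store, and those flow through dispatch and the execution units independently. A toy model of that cracking (purely illustrative, not any real decoder):

    #include <stdio.h>

    /* Toy model: crack a CISC-style read-modify-write instruction,
     *     add [addr], reg
     * into RISC-like micro-ops.  Purely illustrative; a real decoder
     * also renames registers and tags each micro-op for in-order
     * retirement. */

    enum uop_kind { UOP_LOAD, UOP_ALU_ADD, UOP_STORE };

    struct uop {
        enum uop_kind kind;
        int dest;      /* temporary ("physical") register, -1 if none */
        int src_a;     /* register or temporary, -1 if unused */
        int src_b;
        long addr;     /* memory address, if any */
    };

    static int crack_add_mem_reg(long addr, int reg, struct uop out[3])
    {
        int tmp = 100;                                                 /* pretend rename target */
        out[0] = (struct uop){ UOP_LOAD,    tmp,     -1,  -1, addr };  /* tmp  <- [addr]    */
        out[1] = (struct uop){ UOP_ALU_ADD, tmp + 1, tmp, reg, 0   };  /* tmp2 <- tmp + reg */
        out[2] = (struct uop){ UOP_STORE,   -1, tmp + 1,  -1, addr };  /* [addr] <- tmp2    */
        return 3;   /* number of micro-ops emitted */
    }

    int main(void)
    {
        struct uop uops[3];
        int n = crack_add_mem_reg(0x1000, 5, uops);
        for (int i = 0; i < n; i++)
            printf("uop %d: kind=%d dest=%d\n", i, uops[i].kind, uops[i].dest);
        return 0;
    }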
Me too. I'd just rather postpone that day as much as possible, and have a good time while getting there.
On an achy day, my mother used to say, "Never grow old." However, upon further consideration, I think growing old is usually preferable to failing to.
(Many caveats apply. "Growing old" is meant in the physical sense, with lifestyle choices made, of course, to retain capacity. "Growing old" in the mental sense is also something of a choice.)