QNaN or SNaN? I guess it depends if you need more braaaaaaaaains.
I admit to being a bit of a smartass.
So... does anyone actually put a set top box on top of their TV set these days? Once upon a time, TVs were deep enough front-to-back to support this; these days, most aren't.
Or is this a term that was once accurate, but will never be accurate again, like "dialing" a phone? It's been a long time since phones had dials, unless they're being purposefully retro.
He reused the MINIX filesystem layout, and initially hosted builds on MINIX, but to my knowledge he never directly incorporated code from MINIX. Some have claimed he did, but no such claim has ever stuck, especially given that Andrew Tanenbaum himself agrees that Linux didn't annex any MINIX code directly.
It appears Wikipedia's account jibes with my memory.
I can't tell if you're trying to be humorous.
The rationale given is: "The kernel now keeps timestamps relative to the system boot time. Among other things this fixes bogus uptime readings if the system time is altered."
Presumably, this means the internal timestamps Hurd uses are now all monotonically increasing, regardless of any changes to the system time. Obviously, there's a relationship between the internal timestamp and what POSIX calls time_t (and related datatypes). As I read it, they've decoupled the notion of system time (i.e., something that resembles what you'd read from a clock, representing time and date as humans understand it, and subject to humans or network time daemons messing with the setting) from the internal timestamps used for computing the relative passage of time, such as uptime, network timeouts, etc.
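If you want to see the distinction in action, here's a minimal Python sketch (my own illustration, not anything from the Hurd release) contrasting a settable wall clock with a monotonic one; it's the same split POSIX exposes as CLOCK_REALTIME vs. CLOCK_MONOTONIC:

```python
import time

# Wall-clock time (like CLOCK_REALTIME): represents human date/time and can
# jump forward or backward if an admin or an NTP daemon adjusts the clock.
wall_start = time.time()

# Monotonic time (like CLOCK_MONOTONIC): counts from an arbitrary origin
# (often boot) and only ever moves forward, so it's safe for measuring
# intervals such as uptime or network timeouts.
mono_start = time.monotonic()

time.sleep(0.05)

wall_elapsed = time.time() - wall_start
mono_elapsed = time.monotonic() - mono_start

# Both look similar here, but only mono_elapsed is guaranteed sensible even
# if the system time were changed mid-sleep.
print(f"wall: {wall_elapsed:.3f}s, monotonic: {mono_elapsed:.3f}s")
```

The point is simply that anything computing "how long since X" should use the monotonic source, while anything displaying a date should use the wall clock.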
According to the release:
The kernel now allows non-privileged users to wire a small amount of memory.
This is not a typo. Wiring memory means pinning it in RAM so it cannot be paged out. This is potentially important both for security and for real-time applications. On the security front, memory containing keys and passwords should be wired to prevent it from being written out to swap. On the real-time front, if you can fit your working set in wired memory, you're guaranteed not to suffer a page fault while you stay within that working set.
In Linux / POSIX systems, this is what mlock accomplishes.
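As a rough sketch of what that looks like on Linux, here's one page being wired from Python via ctypes (the libc calls are the real mlock(2)/munlock(2); whether an unprivileged call succeeds depends on your RLIMIT_MEMLOCK limit, which is exactly the "small amount" idea):

```python
import ctypes
import mmap
import os

# Load libc through the running process; mlock/munlock are standard POSIX calls.
libc = ctypes.CDLL(None, use_errno=True)

# Allocate one anonymous page to hold, say, key material.
page = mmap.mmap(-1, mmap.PAGESIZE)
buf = ctypes.c_char.from_buffer(page)
addr = ctypes.addressof(buf)

# mlock(2) pins ("wires") the page in RAM: returns 0 on success, -1 on
# failure (typically EPERM or ENOMEM when RLIMIT_MEMLOCK is exceeded).
ret = libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(mmap.PAGESIZE))
if ret == 0:
    # ...store secrets here; this page can no longer be paged out to swap...
    libc.munlock(ctypes.c_void_p(addr), ctypes.c_size_t(mmap.PAGESIZE))
else:
    print("mlock failed:", os.strerror(ctypes.get_errno()))

del buf        # release the buffer export so the mapping can be closed
page.close()
```

In C you'd call mlock()/munlock() directly, or mlockall() to wire the whole address space for a hard real-time process.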
Being able to write to memory, in contrast, isn't particularly noteworthy. You've been able to do that since pretty much the beginning...
I'd expect it to be a very minor effect. I'm not aware of anyone getting worried about this.
Got it. With everything else you've explained, that makes sense.
A related effect is convergent evolution. Say two species of bacteria each colonize a high-temperature environment. Then certain mutations which are favoured at high temperature will likely occur in both of them. When we compare their DNA, this can make it look like they are more closely related than they really are.
Ah, that also makes sense.
I thank you again for the informative responses. You've expertly escorted me up to (or possibly even well past) the edge of my competency.
It's truly a fascinating topic, but for me to really get much more out of it, I think I need to do some homework to learn more about what's already known. There's only so much a generically analytic mind can do w/out learning what's already known in the field.
Thank you for the thoughtful and detailed response. I think I have a better understanding. (But I'll always stay mindful of the Dunning-Kruger effect...)
BTW, this statement captures something I was trying to express more clearly than I stated it:
(Actually it tends to be the other way around - we see islands of conserved sequence, and deduce therefore that they have a function. This isn't how genes are detected, as there are more sensitive gene-specific ways of doing this.)
What I was trying to get at was that if a section of DNA performs some useful function, even if we don't know what it is, it'll tend to be preserved because selection pressure will tend to preserve it by "selecting out" individuals whose mutations tampered with it. The observed long-term mutation rate for any given point should in some sense be inversely related to that point's significance. (More significant => fewer mutations, likely by a function much more stark than just "1/x", where "x" is significance. Key proteins should have a really strong bias to remain unmodified in viable offspring, for example.)
I now have a slightly different question: You mentioned the rate of repeated mutations, where the same piece of DNA was mutated twice or more, sometimes back to its original state. Suppose the environment shifts, such that selection pressure would favor a certain set of mutations to adapt a species to that new environment, and then the environment shifts back. I'm thinking fairly long term, cyclic shifts such as ice ages and the like.
Would such cyclic shifts meaningfully affect the assumptions underlying the multiple mutation rate? You gave this example: "if you compare two sequences and they differ in 10% of sites, it is reasonable to think that 1% of sites have actually mutated twice." I realize you mentioned it was oversimplified. It jibes with a basic knowledge of statistics and statistically independent random variables. I guess what I'm getting at is that cyclic shifts that affect which mutations improve, decrease or are neutral with respect to fitness would imply at least some of the variables aren't independent.
I guess it comes down to what fraction of the mutations actually affect fitness with respect to these cyclic forces. I imagine it's a fairly small proportion relative to the total set of mutations whose fitness effects are completely orthogonal to those long-term cyclic changes. If that's the case, am I correct thinking the effect wouldn't be large?
I guess in general, if the total delta between two samples is still relatively small (10% in your example), any second order effect such as this could only affect that approx 1%, and so that already bounds the potential error from simplifying assumptions anyway.
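To make that independence assumption concrete, here's a toy Monte Carlo sketch (my own construction, not anything from the discussion): treat each site as independently hit in each of two mutation "rounds" with probability 0.1, and count how often the same site is hit in both. Under independence this lands near 0.1 × 0.1 = 1%; any cyclic pressure that correlated the two rounds would push the measured number away from that.

```python
import random

random.seed(42)

N_SITES = 200_000
P_HIT = 0.10  # per-round, per-site mutation probability

# For each site, decide independently whether it mutates in round 1 and in
# round 2; under independence, P(hit in both) = P_HIT * P_HIT = 1%.
double_hits = 0
for _ in range(N_SITES):
    hit1 = random.random() < P_HIT
    hit2 = random.random() < P_HIT
    if hit1 and hit2:
        double_hits += 1

frac = double_hits / N_SITES
print(f"fraction of sites mutated twice: {frac:.4f}  (naive prediction: 0.0100)")
```

This is deliberately simplistic (real models use a Poisson process per site and correct for back-mutations), but it shows where the "10% different implies roughly 1% doubly mutated" back-of-envelope number comes from.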
Again, thank you for your helpful (to my understanding, at least) response.
Let me see if I understand. By measuring over a long period, we're measuring the long-term rate of mutation survival after applying selection pressure, and that could be noticeably different from the raw rate of mutation. Is that a correct summary?
Please go slowly with me. I'm an engineer, not a biologist, and I admit biology is not my strongest subject. I am actually curious, though.
I remember hearing that there are large sections of DNA in many living things that are effectively "junk DNA," or at least are thought to be. By junk DNA, I mean DNA that doesn't code for any useful proteins or other molecules, and generally just seems to take up space. (I'm skeptical that there is much that is really "junk," but bear with me.) If there really are large stretches of non-useful baggage, it would stand to reason that mutations to these sections aren't directly subject to selection pressure, because they don't affect the fitness of the organism.
Is there a way to measure the mutation rates for different sites in the overall genome of a given organism, so that: (a) we can determine if some regions are actually junk because mutations to them do not affect organism fitness, and (b) can distinguish between the rate of mutation and the rate of mutation survival? (This question more or less assumes my understanding in the first paragraph is accurate.)
Yep. The actual secret ingredient (at least for a Flint-style coney, which is my fave) is finely ground beef heart.
BTW, nobody I know in Michigan calls the sauce "chili." If you want a chili dog, by all means have one 'cause they're tasty too, but don't confuse it with a coney dog.
Outside Michigan? Everything that claims to be a Coney dog is just a dang chili dog.
Diplomacy is the art of saying "nice doggy" until you can find a rock.