> Yes.... but I believe that's more about HONORING what you advertise.
> If the printed price they stuck on the goods says "$300" on a $3000
> brand new MacBook Pro, they better honor it.
In The Netherlands, the law states that they have to honour the advertised price as long as it can be reasonably assumed not to be an error. With mega-discounts and super-cheap deals for various products, spotting the error can become difficult. On the other hand, the $300 on the $3000 MacBook would be considered an "obvious error".
The $80 for an $800 flight, however, cannot! The cheap airlines have been selling flights at that kind of rate for ages, so even when an airline that normally doesn't do this proposes such a deal, it should be considered "entirely plausible" by the consumer.
The "bends" in the curve they plot are too abrupt. There must be something else going on.
Looking at the original article, they had only about 3500 drives around 2009. That's 4 years ago. So their "4 year" survival rate is not based on the 25000 drives they have now, but only on the 3500 that they had in 2009. With the sharp bends in the curves around 1.5 years and 3 years, I think they significantly changed their buying policy around those moments. Or the manufacturers started shipping them different drives.
How else can the drive "know" that it's been on for 1.5 years? The annual failure rate drops by a factor of four inside a month.
The explanation of the bathtub curve explains it a bit: the random failure rate is apparently about 1.4% per year. The initial failure rate is about 5.1 − 1.4 = 3.7% per year. But instead of the initial failures "tapering off" to "small" values around 1.5 years, they stay constant for 1.5 years and then suddenly drop to zero. To me this points to something like: "they bought a big batch of drives about 1.5 years ago with such a high random failure rate that it pulls the first-1.5-year average up to 5.1%/year".
Do the same analysis 3 months from now, and the "1.5 year bend" moves over to 1.75 years. That's my hypothesis based on the data they publish. With the underlying data and some time to spare, the current data could already debunk or confirm it: if my hypothesis is correct, running the analysis on only the data that is now older than 3 months should show the bend around 1.25 years. If that happens, it makes my hypothesis very likely.....
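A toy model of my hypothesis, with made-up batch sizes and a bad-batch rate chosen so the weighted average comes out at the observed 5.1%/year:

```python
# Toy model (all numbers made up) of how one bad batch can fake a bend
# in an age-based failure curve: at analysis time, a batch bought 1.5
# years ago only contributes data for ages below 1.5 years.

def annual_failure_rate(age_years, bad_batch_age=1.5):
    """Average failure rate (%/year) observed at a given drive age."""
    good_rate = 1.4   # %/year of steady "random" failures
    bad_rate = 19.9   # %/year, assumed for the hypothetical bad batch
    n_good, n_bad = 20000, 5000   # assumed fleet composition

    if age_years < bad_batch_age:
        # Both batches have drives this young: weighted average.
        return (n_good * good_rate + n_bad * bad_rate) / (n_good + n_bad)
    # Only the older, good batches have drives this old.
    return good_rate

for age in (0.5, 1.0, 1.4, 1.6, 3.0):
    print(age, round(annual_failure_rate(age), 2))
```

The curve sits flat at 5.1%/year, then snaps down to 1.4%/year at exactly the age of the bad batch, with no tapering: precisely the shape of the plotted bend.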
What I'm trying to say is: In theory a CPU is fast enough to refresh all pixels within the time of a single frame.
But having a GPU that can do things to the screen while the CPU does other necessary stuff makes sense. It starts with 2D bitblits.
You don't need a GPU at all. A screen is 2Mpixels. Refreshing that about 60 times per second is enough to create the illusion of fluid motion for most humans. So that's only 120Mpixels per second. Any modern CPU can do that!
Why do you have a GPU? Because it's not enough to just refresh the pixels. You need (for some applications, e.g. gaming) complex 3D calculations to determine which pixels go where. And in complex scenes, it is not known in advance which objects will be visible and which ones (or parts) will be obscured by other objects. So instead of doing the complex calculations to determine what part of which object is visible, it has been shown to be faster to just draw all objects, and on drawing each pixel check which object is closer: the one already drawn or the one currently being drawn.
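That last trick is the classic depth buffer (z-buffer). A toy sketch, with a made-up "scene" of two overlapping rectangles at fixed depths:

```python
# Toy depth-buffer (z-buffer) sketch: draw every object, and on each
# pixel keep whichever object is closest. The scene is made up.

WIDTH, HEIGHT = 8, 4
depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
color = [["."] * WIDTH for _ in range(HEIGHT)]

def draw_rect(x0, y0, x1, y1, z, c):
    """Draw an axis-aligned rectangle at constant depth z, color c."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            if z < depth[y][x]:   # closer than what is already there?
                depth[y][x] = z   # then this object wins the pixel
                color[y][x] = c

draw_rect(0, 0, 6, 4, z=5.0, c="A")  # far rectangle, drawn first
draw_rect(3, 1, 8, 3, z=2.0, c="B")  # near rectangle wins the overlap

for row in color:
    print("".join(row))
```

The point of the technique: the final image is the same no matter in which order you draw the rectangles, so no visibility sorting is needed up front.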
In Holland, 100% of the car-cyclist collisions are "caused" by the car. The law was modified to DEFINE it that way. The motorist is ALWAYS responsible.
On the other hand, "poor decisions by the cyclist" is still compatible with "caused by the motorist". If the cyclist keeps more "margin", a traffic violation by a motorist (one that would otherwise have "caused" an accident) won't lead to injury.
One of the things I always do while cycling is: if they don't CLEARLY give me the right-of-way that I have, I slow down so that I can stop in time without getting hurt. This will COST them time, because I can't speed up infinitely fast once they HAVE clearly stopped for me (and I might pretend to be speeding up quickly afterwards, while in fact... not...).
The comment you're reacting to was using that case as an example of the question: "should cycling accidents that happen off-road or downhill count?". Everybody agrees, I think, that hobby off-road cycling should not count towards the "bad rep" for cycling, whereas cycling to work, even if it happens "on the sidewalk" and is thus literally "off-road", should be counted as a cycling-transport-related incident.
Many people have a screen that's (way) wider than it is high. And they browse "full-screen". I like to multi-task: keeping an eye on other things on my screen besides my browsing. So my browser window will be something like 800x1024 when I'm browsing on a 1280x1024 screen.
Anyway, with that background, having even more "screen space" dedicated to the sidebar is annoying. The sidebar is useful at the top (it has content there), but then becomes empty next to most of the articles. So now only "half" the screen is useful; the other half being blank means I bought a big screen with many pixels for nothing....
I disagree. I'm pretty confident that Canonical builds the binaries in a safe way. Even though there is a thirty-year-old "proof of principle" article (Ken Thompson's "Reflections on Trusting Trust") on how to hide a backdoor somewhere almost impossible to detect.
The problem is that an average Unix system has many bugs at any one time. Debian and Canonical have given up on updating the kernel on every privilege escalation bug. You might forget to run apt-get upgrade for a few days.
When the NSA needs your data, they will surely have a big list of things to try to get into your system, and they will most likely succeed. It's the same with every other directed adversary: they'll be able to get in.
Note that I'm toning down what I read in the media a bit. When the facts state: "NSA was able to hack into some phones", the reports will change from "NSA hacked into phones" into "NSA can hack all phones" and "NSA has access to all data on your phone". They don't. Once they are interested in you, they will try to hack your phone and get what they need.
It's different for internet monitoring. Traffic analysis and automated wiretapping is something they are apparently into. So the big internet providers have automated that, and they can't do much to verify the requests they get. So when Snowden says he could tap anyone anywhere, he means he could issue a "wiretap warrant" from his desk, which would reach, say, Yahoo as: "NSA wants access to the traffic from IP x.y.z.w, and you're not allowed to ask why." And then Yahoo might try to fight that a few times, but they have already lost. So nowadays that is processed immediately and/or automatically.
Moore's law predicts that the "factor-of-33" will be bridged in about 10 years. There is only a factor of 20 to the "peak performance", so about a year before that, peak performance might top the exabyte "barrier".
(Some people plug different constants into Moore's law. I use a factor of 1000 every 20 years. That's about 30 every 10, 2 every 2, and about 5 every five. This has never failed me: it always works out.)
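That rule of thumb is just 1000 raised to the power years/20; a quick check of the rounded factors (the "30 every 10" is really closer to 32, and "5 every five" closer to 5.6):

```python
# The factor-of-1000-every-20-years rule: growth over n years is
# 1000 ** (n / 20). Printing the factors for the intervals quoted.
for years in (2, 5, 10, 20):
    print(years, round(1000 ** (years / 20), 1))
```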
Why would it be wrong? As long as the answer is "0" (*) it can be quite right!
I'll tell you a secret: In informal settings, people sometimes say/write "weight" when they mean "mass". Formally not entirely correct, but depending on context, people (like you for instance) can deduce what was meant.
(*) Or whatever the weight of an object in orbit would be according to your definition of "weight".
Some physical quantities make sense when you take their reciprocal: e.g. 1/resistance, or "conductance", makes sense in the electronics world. You know that complicated formula for parallel resistors? Forget it.
Resistors in series? Just add them!
Conductances (i.e. one over the resistor values) in parallel? Just add them!
Well, if you have the values of the resistors in ohms, you need to convert to 1 over ohms (Siemens) first. If you need the result in ohms, you have to convert back afterwards.
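A minimal sketch of that recipe (helper names are my own invention):

```python
# The "work in conductance" trick: to combine parallel resistors,
# convert ohms -> siemens (1/R), add, then convert back to ohms.

def series(*resistances_ohm):
    """Resistors in series: ohms just add."""
    return sum(resistances_ohm)

def parallel(*resistances_ohm):
    """Resistors in parallel: conductances (siemens) just add."""
    conductance = sum(1.0 / r for r in resistances_ohm)
    return 1.0 / conductance

print(series(100, 200))    # 300 ohms
print(parallel(100, 100))  # 50.0 ohms: two equal resistors halve
```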
Anyway, with the physical quantity "time" it sometimes makes sense to take one-over-time as well. The unit is called Hz. For example, when things repeat, Hz makes sense. But in other cases, one over time might make sense too. What I was trying to say is that over long enough timescales, a microhertz isn't all that precise. Maybe it is the same joke as when the poll lists "metric tons" among the many tiny quantities: getting metric-ton accuracy when measuring the weight of the moon would be outstandingly accurate.
If you need "repeat" for something to allow measuring in Hz: there is a bump in the FFT of the kissing frequency for a lot of guys and their girlfriends. The bump lies at 0.031709 microHz, and is quite sharp: New Year's Eve.
I planned my vacation 3 months in advance. The plane was supposed to leave at 13:20. That's about 6 microHz accuracy.
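For the record, that once-a-year figure is just one-over-a-year; a quick back-of-envelope check with a plain 365-day year:

```python
# Back-of-envelope check of the once-a-year frequency, using a plain
# 365-day year (no leap correction).
seconds_per_year = 365 * 24 * 3600   # 31,536,000 s
freq_uhz = 1e6 / seconds_per_year    # hertz -> microhertz
print(f"{freq_uhz:.6f}")             # 0.031710
```

(The value quoted above, 0.031709, is the same number truncated rather than rounded at the sixth decimal.)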
Atoms are about 1 angstrom wide (as a rough estimate: hydrogen is about this size, uranium is two or three times larger). That's about 0.1 nm. So a picometer is really, really small. Way, way smaller than an atom.
No. I'd gladly pay for a phone that is not 7.6mm thick but, say, 12mm, and then has 3x better battery life. It is the autonomy and the "I don't have to remember to charge it every time" that matter to me.