Sounds about right. And the outdoor hose is a good analogy.
But you went off topic. The question is: if you were not there and the police saw that stranger and arrested him for theft (without even asking whether you wanted to press charges), would you think that was OK, or perhaps "a bit" too much?
I switched from an N9 to a Galaxy S3 about a year ago (because the N9 lacked some apps I needed - thanks to Nokia abandoning it and alienating developers) and I still think the N9 was a much superior experience to both my Galaxy and my company-issued iPhone.
I'll keep an eye out for this. Hopefully, if it catches on, it will get a lower price tag (given that it doesn't use very expensive hardware). The hardware does not seem very high-end, but the native apps are fast (the single-core N9 seemed faster than dual-core Android phones). Plus you get to run Android apps; if those run without problems, this should let people like me, who had to switch to Android for the apps, get the phone.
One thing I don't like that much is the IPS screen. I don't mind that it has a lower resolution than the current flagship phones, but I would prefer the S-AMOLED that the N9 had (with an always-on clock that used almost no battery power!).
Oh, there is also some talk that they will develop replaceable backs, e.g. you will be able to remove the back cover and put in a slide-out QWERTY keyboard, N900/950 style.
So, I'm keeping an eye out for this; if it really is better than the N9, it could be the phone to have.
I also need to work primarily on OS X (and I also use some Windows programs in an XP VM). My Mac Pro is great at home, but a few years ago I tried a MacBook on the go and I absolutely hated it, for various reasons. I didn't want to spend too much, since I don't often work away from the Mac Pro, so as an experiment I tried a $150 MSI netbook for which there were instructions to install OS X. It wasn't bad, and for the rare trips it was OK to work on, but it was a bit of a hassle to set up (and get everything working) and then update.

So when I recently wanted something better and faster, I was back to the same dilemma: pony up for an Apple that I won't really enjoy for its price (I also hate glossy screens), or get a non-Apple and waste time setting it up, updating it, etc. Then I thought maybe I was going at this all wrong. Why not an OS X VM on a very fast Windows host, i.e. the opposite of my usual setup? So, for a little over $500 I got a 2-year-old ThinkPad X220 with an i7, 8GB RAM and a 160GB SSD, which turned out to be an amazing machine (I highly recommend it if you want a 12.5" ultrabook type): built like a tank yet very light, at a fraction of the cost of an equivalent Mac, and it runs an OS X Mavericks VM very comfortably (on VMware Player with the OS X client patch applied).
Now, if you don't like Windows at all, I guess it would not be the best solution to have it just to launch a VM, but if you use both like me, and you'd probably have Windows under Fusion anyway, it is a solution worth exploring. There are some great business laptops like the ThinkPads, which you can even get at a bargain, since they depreciate like everything else but Apple devices. In my case a $2500 laptop was $500 a little over 2 years later, still under warranty. Just make sure you have an SSD (and plenty of RAM) if you want the VM to run fast.
It would indeed be much better to peg salaries simply to performance. I mean, that is the capitalist thing to do, right? A movie star should get paid well if the movie did well at the box office. A CEO should also get paid well if the company does well. He shouldn't just earn $25 million for destroying and selling off the #1 mobile phone maker. And I haven't even touched the CEOs involved in the recent recession...
In many cases the US government has specifically intervened to not allow capitalism/the free market to work as it should. You see, if a company goes after maximum risk and fails, taking down the economy with it, the free market is supposed to let the company die and serve as an example, so that more efficient companies can replace it and be aware of the risks. Instead, the government bails out the company and the CEO gets a golden parachute, which pretty much breaks the free-market model.
No, you don't get it.
All volume controls go to 10.
Nigel's go to 11. They are one louder.
Similarly, all safety ratings go to 5.
Tesla's goes to 5.4. It is 0.4 safer.
Don't you just hate it when the summary is so useless that you actually have to RTFA (or, more realistically, skim through it)? Fuel cells can mean natural gas, gasoline, diesel, etc., all of them significantly less interesting, since those fuels have been powering cars for 100+ years by converting chemical energy directly to mechanical energy, without going through the electricity step. But hydrogen is interesting. And finally some competition for Tesla - let's see what happens to a hydrogen fuel cell when you hit debris on the road!
Actually the clock speed for the 862GFLOPS figure is in the footnotes, see here: http://images.anandtech.com/doci/7507/amd_kaveri_specs-100068009-orig.png
So, even unintentionally, they are talking about clock speeds...
Ah, it reminds me of when I was in 8th grade and mathematically proved the existence of alien life by means of a brilliant modification to Drake's equation, but then aliens came and convinced me not to let the world know, since the world was not ready. Anal probes might have been involved in the process.
Why be a smart-ass and not recommend one yourself? I mean, a lot of us here waste enough time with the likes of Slashdot that we don't have time for reading the major tech sites, etc.
So it was not more than break-even. The gain was actually about 0.0078: 1.8 MJ in, 14 kJ out. Just a small (i.e. about "1") mistake by the genius journalists.
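For anyone who wants to check the arithmetic, here is a minimal sketch using the figures quoted above (1.8 MJ of laser energy in, 14 kJ of fusion yield out):

```python
# Energy figures quoted in the comment above
energy_in_j = 1.8e6    # 1.8 MJ delivered by the lasers
energy_out_j = 14e3    # 14 kJ of fusion yield

gain = energy_out_j / energy_in_j
print(f"gain = {gain:.4f}")   # ~0.0078, nowhere near break-even (gain >= 1)
```

So the shot was off from break-even by a factor of roughly 130, not "just past" it as the headlines suggested.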
Not sure if an AC is worth responding to (esp. one that sounds like a dick), but here goes.
Dude, you are being ridiculous. A few quick examples...
1. Your comments re: hyperthreading arbitrarily exclude the possibility that the settings they used were okay. Hyperthreading on Pentium 4 was weird. Sometimes it hurt when you wouldn't expect it to (e.g. 2 threads slower than 1), and when running a single thread it was often neutral.
By definition, hyper-threading adds overhead to single-threaded tasks. At best, the overhead is imperceptible. So they go and ENABLE HT for the SINGLE-threaded test, where at best it hurts just a little, but at the same time they DISABLE it for the multi-threaded test, where it might have helped. Yeah, I could buy enabling/disabling it for both cases, but enabling it specifically for single-threaded means you are trying to hurt performance.
2. Compiler flags -- You are crazy if you believe different compiler flags are 100% equivalent across all combinations of compiler and target CPU. As in, the optimizations made by turning on -fast on Apple's custom G5-targeted GCC almost certainly were different from -ffast-math on a more mainline GCC targeting x86. Different CPUs often have very different IEEE compliance shortcuts, with different performance implications.
I did not say -fast would be exactly the same as -ffast-math. And both settings are not about IEEE compliance, but exactly the opposite - that's the whole point! So they gave the Apple compiler the chance to optimize by relaxing IEEE, while they had Intel run without any such optimization. How is that comparable? You are comparing different tasks.
3. Special malloc() libraries are de rigueur for SPEC benchmarking. Go check a few scores posted to spec.org. Notice how most of them mention MicroQuill SmartHeap? That'd be a commercial memory allocator library which does have some general-purpose uses, but is known to be tuned well for SPEC. That kind of SPEC gamesmanship sure isn't pretty, but it's common as dirt.
Again (like talking to a wall): they used a special library ONLY for the G5. Furthermore, according to their own paper, the library they used is "unsuitable for many uses". So it is not a general-purpose library that they might ship a machine with; it is something specific to this benchmark, and they applied it only to the G5.
4. How dare they compare the G5 to the biggest commercial competitor. How dare they!!! (Yeah, duh, they'd have lost badly to the Opteron. BFD.)
To be more exact, they compared the biggest (not fastest) competitor's CPU from the previous year with their own unreleased CPU.
5. The "no shame" thing -- They compared G5 to Pentium 4, then they later compared Core 2 Intel Macs to G5 Macs. THE PENTIUM 4 IS NOT THE SAME THING AS THE CORE 2 YOU IDIOT. If you are at all conversant with developments in x86 CPU performance over the last 10 years, you should know that the Core 2 kinda blew away everything which came before it. The only lack of shame here is you, for making such a dumb statement.
Thank you. You calling me an idiot must be some sort of compliment. Anyway, I guess you don't remember the G5 website with the comparison vs x86. Apart from benchmarks, it was touting all the features that PowerPC had over x86, like AltiVec etc., things the Core architecture did not change. I guess you have to find those documents Apple made back then to see the irony - but I remember it was very surreal to switch from one Apple.com page to the other. Oh, and the first Intel Macs did not use a Core 2 but a Core Duo, which was about on par with the Athlons of the era.
By the way, in all that ranting, you missed what was by far the most important bit of sandbagging Apple may have done against the P4 in that test: They used gcc. If you look at SPEC submissions for Intel x86 where it's obvious the submitter wanted to post a high score, they almost universally use Intel's ICC compiler. ICC + ICC-specific optimizations are far more important to the final score than anything you've flipped your lid about. I remember some discussion at the time that it was actually kinda cool to see a P4 SPEC run using gcc, since that was a more relevant compiler for a lot of people interested in SPEC scores.
I didn't go into that because some people consider gcc fair for comparing architectures. Of course, Apple was actively working on gcc while Intel has its own compiler, so, yes, that was also one of the biggest "cheats" of the paper.
Battery life still behind the iPhone: http://images.anandtech.com/graphs/graph7376/58409.png
You are comparing a phone with a 4-inch screen to a "phone" with a 5.7-inch screen. You can't compare battery life when the screen is what uses up most of the power. If you want a huge screen, you have to compromise on battery life (and many other things - seriously, the Note is ridiculously big to use as an everyday phone).
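To put a number on the screen-size point: panel area, and hence the lit-up area the battery has to drive, grows with the square of the diagonal. A small sketch, assuming both panels are roughly 16:9 (the Note 3 and iPhone 5s are close to that):

```python
import math

def screen_area(diagonal_inches, aspect=16 / 9):
    # d^2 = w^2 + h^2 with w = aspect * h, so solve for h first
    h = diagonal_inches / math.sqrt(aspect ** 2 + 1)
    w = aspect * h
    return w * h

note = screen_area(5.7)
iphone = screen_area(4.0)
print(f"area ratio: {note / iphone:.2f}")  # ~2.03x the panel area to light up
```

Twice the panel area is a big handicap in any battery benchmark, whatever the rest of the hardware does.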
Browser speed still behind the iPhone: http://images.anandtech.com/graphs/graph7376/58440.png
I don't suppose Samsung can do much about that. It is quite possible that, with the same CPU, an Android device would still be slower than an iOS device. Sure, Google has made a fast Java VM, but it still is a Java VM, right? For example, I had a Nokia N9 running MeeGo/Maemo. It could run circles around Android phones with the same CPU.
Graphics performance still behind the iPhone: http://images.anandtech.com/graphs/graph7376/58425.png
Ehm, this result (to which you cleverly linked directly, hiding the context) is run at native resolution. The Note has almost 3x the iPhone's resolution, so it would be pretty strange for it to come out on top in fps. But in all the other GPU benchmarks, which are run at 1080p, it does come out on top of the iPhone.
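The pixel-count gap is easy to verify. A quick check using the panel resolutions of the two phones in question (1920x1080 for the Note 3, 1136x640 for the iPhone 5s):

```python
note_pixels = 1920 * 1080     # Galaxy Note 3 panel
iphone_pixels = 1136 * 640    # iPhone 5s panel

ratio = note_pixels / iphone_pixels
print(f"{ratio:.2f}x the pixels per frame")  # ~2.85x, i.e. "almost 3x"
```

At native resolution the Note's GPU is pushing nearly three times as many pixels every frame, so comparing raw fps between the two says very little about GPU speed.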
But in any case, I personally prefer a phone that has good battery life, fits in my hand and lets me do whatever I want with it. So that rules out the Note and the iPhone.
OK, I remember reading the Apple benchmarks myself (in utter disbelief - even for Apple it seemed too much), and the article you linked to does not agree with my memory. So let's go directly to the source. Read the benchmark paper yourself on archive.org: http://web.archive.org/web/20030727103031/http://veritest.com/clients/reports/apple/apple_performance.pdf
I gave it a quick look to refresh my memory and here are some highlights:
- They DISABLE hyper-threading on the SPEC rate test, which is the multi-processor test. Then they ENABLE hyper-threading on the SPEC base test, which is the single-processor test!!! They defend this by saying something like "hyper-threading is slower sometimes". Well, they sure know that, since they only enable it when it will slow down the Pentium! I would have given them the benefit of the doubt if they had disabled (or enabled) it for both tests, but selectively enabling/disabling it means you know what you are doing.
- They use -O3 -fast -ffast-math when compiling for Apple, which enables non-IEEE fast-math optimizations. Of course, they had the Intel CPU run accurate/IEEE-compliant code - no equivalent of -ffast-math was used.
- They go on to make some other "crazy" optimizations on the G5, like "modify CPU registers to enable memory Read By-pass", or installing a special malloc library that trades memory for speed, just for the single-threaded benchmark. This is not how you benchmark for comparison purposes, especially when your "optimizations" for the competing platform are things like "turning off update" and "turning off hard drive sleep" (they obviously put that stuff in just to pretend they "optimized" there as well).
And I am sure there are other things as well; this was from a quick read. And of course, let's not forget that they compared the G5 with an Intel P4 CPU when, at the time, AMD's Athlons/Opterons (64-bit versions were just out as well) were destroying Intel (in performance, not sales - but that is another story).
In general, that paper is so ridiculous that I can't believe Apple kept promoting it after they had been called out. But then again, given Apple's target audience, the explanation is simple. What was even more ridiculous is that when Apple started selling the Intel-based Macs, they kept for a while the section of their website that showed how much faster the G5 Mac was than Intel, while the Intel Mac pages had comparisons showing how the Intel Mac was faster than the G5 Mac. No shame!
More like rule #1, and it is illustrated ingeniously in Mr Plinkett's epic 70-minute Episode I review.
The aforementioned review is also widely accepted as the best thing to come out of the wreck that is SW: Episode I.