Just adding my 2 cents. I agree with you that crashing is never acceptable (as in, something is broken), but a forced exit through an assertion is the sanest thing to do when error handling has failed and you are left with an inconsistent state. The best of bad options, so to speak. You should not continue and hope things turn out well (imagine we were talking about a life-critical system); you should exit, avoid further damage, and restart the program. You might of course be able to write code that takes you out of the situation and fixes the state, but then it is not an assertion anymore, it is error handling.
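To make the distinction concrete, here is a minimal Python sketch (the withdraw function and its account dict are my own illustration, not from any particular codebase): error handling deals with expected, recoverable failures, while the assertion guards an invariant that should be impossible to violate if the code is correct.

```python
def withdraw(account: dict, amount: int) -> None:
    # Error handling: an expected failure the caller can recover from.
    if amount < 0 or amount > account["balance"]:
        raise ValueError("invalid withdrawal amount")

    account["balance"] -= amount

    # Assertion: if this ever fires, the error handling above has failed
    # and the state is inconsistent -- aborting is safer than continuing
    # with a corrupted balance.
    assert account["balance"] >= 0, "inconsistent state: negative balance"

acct = {"balance": 100}
withdraw(acct, 30)
print(acct["balance"])  # 70
```

The caller can catch the ValueError and retry; nobody should ever "catch" the assertion, because at that point the program no longer knows what its own state means.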
There is an inherent flaw in this thinking, and this flaw also shows us why large, powerful governments are a bad idea. That flaw is trust, or more specifically, trust in a single entity. Think about it: almost every malware attack vector starts by exploiting a common point of trust (e.g. you trust Java, Flash, or your browser). When trust is centralized, the bad guys only need to focus their efforts on subverting that single point. This is true in both government and information security. My point is, creating an ecosystem that relies on a central point of trust is setting us up for failure. (Sorry for typos, using a tablet.)
While I absolutely agree with you that a single authority is a dangerous thing to have, what is even worse is to mix different levels of trust. That is what we have been living with up to this point: there haven't really been any restrictions on what applications can do on the system in the context of the user running them. It takes only one malicious or badly written piece of software to compromise the security of your whole system. By sandboxing the different pieces of software, the security of the whole system would no longer equal the security of the lowest common denominator. (Up to this point I have rarely installed any software, simply because I had no control over or assurance of what it was doing on my system. With iOS, I felt for the first time somewhat confident installing 3rd-party apps from developers I had never heard of.)
I'm not saying that there aren't any issues. I just think that the security benefits outweigh the downsides of a more controlled environment. From a technological standpoint, this is absolutely the way to go in the consumer market. If this leads to some applications getting rejected, that is not a technological problem; it just means it needs to be solved in some other way, for instance by allowing users to install root certificates for 3rd-party "app stores". This could be how MacPorts and other package management systems work in the future.
PS. It was probably also a smart move to deny emulators on iOS. I'm already somewhat skeptical about games because of battery life; running something inside an emulator does not sound good until we have phones running on supercapacitors or some better power source.
The way I see this going, this might be the case by default: typical users get their software through Apple. Apple controls the user experience by denying applications it doesn't want for its users, for whatever reason. On the upside, users get safer downloads and applications with at least some level of quality. The fact that applications are sandboxed, with what they can do controlled by "entitlements" granted by Apple, will eventually increase the security of OS X. For too long the access rights of a process have equaled the access rights of the user. Whitelisting applications will be much more effective than blacklisting (i.e. virus scanners). I'm not quite sure why most people see this as a bad thing.
For the users on the other side of the spectrum, e.g. developers, I would not worry too much. Unlike iOS, OS X is used to create applications; software doesn't just magically appear in its final form on Apple's doorstep. You may need to sign your software before being able to run it, but the option will be there. And why should this be enabled by default? Most people will never touch the code.
I would like to compare Battlefield: Bad Company and Battlefield: Bad Company 2. The first game had a single-player mode consisting of short cutscenes showing what is going on; otherwise you were quite free to run, drive, or fly around and do whatever you wanted, with plenty of routes to choose from. You could drive straight into an enemy base, or avoid the base entirely, maybe sniping a few guys along the way. The game has lots of replay value for this reason; nobody dictates how to play it.
In Bad Company 2, however, you're not given any choice. The game tries to deliver a "cinematic" experience, and it is totally boring. There is only one path forward, and the experience is dumbed down to be the same for everyone. For instance, there was that one place where a burning guy comes running towards you. Not so impressive, because I did not do anything to make that happen (and I had actually seen it before in the trailer); it always happens. Another example was a spot in single-player with just one narrow route forward and no cover. It was so obvious there would be an ambush there. I would have tried to flank, but as there were no alternative routes, I threw some grenades down the route ahead and got some kills. After that I continued, and had one of the AI squadmates shout "Ambush!"... Yeah, nice, except I had already killed all the enemies. Total mood killer.
Everybody plays differently, and I, for instance, always try to take the non-obvious paths (the ones without the ambush). Cinematic experiences hardly ever work the right way if you play like this, and even if they do, there is no replay value. I don't think a real cinematic experience comes from predefined animations or events that trigger when you stumble upon them. A cinematic experience comes when something totally random occurs, be it in single- or multiplayer. It is like having an RPG fly right past you, or managing to take cover from a tank... These things just happen; they are never scripted.
I think my main point is that everybody builds their own experience and should come up with their own goals, rather than have the game developer decide how they should play.
My guess is that the data is fetched to the phone when other means of positioning fail. This data is probably not your location but the locations of nearby Wi-Fi hotspots. Using those hotspot locations, the phone can still approximate your position, which is of course neat. According to the update in the article, Android phones would seem to do the same.
Buffering the data on the device makes sense; downloading it every time you visit a location would be a much bigger privacy issue. Of course, downloading it in the first place reveals your approximate position to Apple (or is it Google?). In my opinion, there are two things that could be improved: 1) the ability to disable Wi-Fi hotspot positioning entirely, and 2) expiration of the data (or shorter expiration, if it already expires) after maybe one month to a couple of months.
I don't have an iPhone, so I have not analyzed any data, but this would seem logical to me. My bet is that this is not some evil scheme to "track your every move", so calm down.
Okay, so I got a few replies basically asking what the difference is between having these points at 0 and 100 instead of at 32 and 212 (Fahrenheit), or at, say, 0 and 3 on some "Geekoid scale", so here is one.
A one-degree change in Celsius equals a one-degree change in the SI unit Kelvin; only the reference point is in a different place (absolute zero). Thus, converting between Celsius and Kelvin is as easy as subtracting or adding 273.15, depending on which way you want to go. With Fahrenheit or the "Geekoid scale", you need to do multiplication as well.
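A quick sketch of that difference (function names are mine, just for illustration): Celsius to Kelvin is a pure offset, while Celsius to Fahrenheit needs a scale factor as well.

```python
def celsius_to_kelvin(c: float) -> float:
    # Same size of degree, different reference point: offset only.
    return c + 273.15

def celsius_to_fahrenheit(c: float) -> float:
    # Different degree size *and* reference point: scale, then offset.
    return c * 9.0 / 5.0 + 32.0

print(celsius_to_kelvin(0.0))        # 273.15
print(celsius_to_fahrenheit(100.0))  # 212.0
```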
At the end of the day, we could put these points wherever we like and it would be okay. In practice, I think there is demand for two common units: one for scientific use and one for everyday use. Which fits Kelvin better? Celsius, which differs only in reference point and has a scale that makes sense in everyday life? Or Fahrenheit, which requires scaling and whose 0 °F really does not matter in everyday life? If all you want is to measure fever, the zero point could just as well have been put at 100 °F.
I'd pick Celsius. But hey, I admit that I am biased.
Basically, I'm saying they are both roughly as useful (but Celsius is far more widely used).
The advantage of Celsius over Fahrenheit is that it is bound to two very useful points: the temperature at which water freezes (0 °C) and the temperature at which water boils (100 °C). These can be used to predict things more easily, like whether there is a risk of ice on the roads, whether a frozen lake or sea will hold your weight, and so on. Or just to tell whether a surface is above 100 degrees: put some water on it and see if it vaporizes.
... It's compromised. Fortunately, your IT guy is on the ball. At 11am the next day, you get a call from your network admin asking you if you are signed into the VPN because he expects that you're in the office, but you also appear to be signed in remotely. You confirm that you are not signed in and the two of you realize that you've been hacked. He temporarily disables your access. You go home, clean up your home computer (assuming that you can) or bring it in to have them clean it up, and then it's time to give you access back.
Now here's where things diverge. If you've used a password, you just have to change your password to a new one, and it's secure again. Your fingerprint isn't changeable.
I have not used biometrics and am no expert on the matter, but I think there is an obvious solution to this problem: biometrics should only be used for authentication on the local side. A successful local authentication would then authenticate the local user remotely using public-key cryptography. In that case, if the account gets compromised, all you need to do is generate a new key pair on a clean computer and you're secure again.
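A rough sketch of what I mean, using the third-party Python "cryptography" package (the enrollment/login flow and all variable names here are purely illustrative, not any real biometric API): the server only ever stores a public key, so recovering from a compromise means enrolling a fresh key pair, never changing the biometric itself.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: after a successful *local* biometric check, the device
# generates a key pair; only the public key is sent to the server.
device_key = Ed25519PrivateKey.generate()
server_known_pubkey = device_key.public_key()

# Login: the server sends a random challenge; the device signs it,
# but only after the local biometric check succeeds again.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# Server side: verify() raises InvalidSignature if the signature is bad.
server_known_pubkey.verify(signature, challenge)
print("authenticated")

# If the account is compromised, enroll a new key pair from a clean
# machine; the fingerprint itself never left the device.
```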
I am shocked that there is a bunker under the US Vice President's residence. Here in Finland, all larger structures are required by law to have bunkers. Quote from Wikipedia:
Finland has over 40,000 air-raid shelters which can house 3.8 million persons (71% of the population). Private homes rarely have them, but houses over 600 square meters are obligated to build them. Fire inspectors check the shelters every 10 years and flaws have to be repaired or corrected as soon as possible. The law requires that inhabitants of apartment blocks can clear the shelters and put them into action in less than 24 hours. Also, the shelters must possess a working phone line connection that must be usable at all times.
If 2% of the votes were lost, how many were recorded incorrectly or not registered properly? If the system can lose votes, it can just as easily assign them to the wrong person...
As far as I know, the reason votes were lost was that the voting system had a very bad UI. For a vote to be registered, you had to push an OK button more than once *) - something that wasn't apparent, and which not all users understood to do. Also, when removing the voting card from the machine, no indication was given of whether the vote had been registered or not. Votes were lost simply through bad UI design, which could have been fixed later on.
*) Having a confirm button is good, but the system should then clearly indicate that voting is still in progress.
There's no standard way to control a device from a standard headphone jack
Sounds like a good argument to develop a standard rather than applaud this bad behaviour.
A reality check, please: companies would rather push out new products immediately than argue about some random feature in a standards committee. In fact, it is much more in a company's interest to keep quiet about this kind of feature.
But don't get me wrong: I think standards are good and crucial to the whole business. It is just that we should not expect companies to develop these standards on their own.