Apple spends serious coin on Research and Development, far more than their competition.
This is almost true, though the vast majority of Apple's R&D funding is firmly at the D end of the spectrum. IBM used to spend a lot more than Apple on research, though they've cut back a lot. Microsoft still does (around $5bn/year on MSR). These companies and Google (and Oracle, and so on) all throw grants at universities for research, which Apple doesn't. It wasn't until the last few months that Apple even published any of their research.
Maybe; it depends on the output power of the blowtorching towers, keeping in mind the inverse-square law. In addition, 5GHz (and higher frequencies) doesn't penetrate solid objects nearly as well as 2.4GHz and below. Yet, paradoxically, 5GHz is better than 2.4GHz in a home/office environment because the SNR is much better, thanks to the lack of surrounding interference.
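As a back-of-the-envelope illustration (the 10m distance and the numbers are mine, purely illustrative), the free-space path-loss formula shows the roughly 6dB penalty 5GHz pays over 2.4GHz before any walls are even involved:

    /* Free-space path loss sketch: higher frequencies lose more signal over
     * the same distance, before wall penetration is considered.
     * FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c), d in m, f in Hz. */
    #include <math.h>
    #include <stdio.h>

    static double fspl_db(double distance_m, double freq_hz)
    {
        const double c  = 299792458.0;            /* speed of light, m/s */
        const double pi = 3.14159265358979323846;
        return 20.0 * log10(distance_m)
             + 20.0 * log10(freq_hz)
             + 20.0 * log10(4.0 * pi / c);
    }

    int main(void)
    {
        double d = 10.0;                                  /* 10m of free space */
        printf("2.4 GHz: %.1f dB\n", fspl_db(d, 2.4e9));  /* ~60 dB */
        printf("5.0 GHz: %.1f dB\n", fspl_db(d, 5.0e9));  /* ~66 dB */
        return 0;
    }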
In this respect, it's not really any different from stuff genetic algorithms have been doing for decades. If you have a set of executable tests that can tell if the algorithm is working correctly, then you can evolve something that will pass the tests. Of course, you have absolutely no idea how it will behave on inputs not covered by your tests.
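As a toy sketch (entirely hypothetical, not anyone's real system), here's a minimal genetic algorithm whose only fitness signal is an executable test suite. It happily converges on something that passes the three tests, and says nothing at all about any untested input:

    /* Evolve coefficients for f(x) = a*x + b until the test suite passes.
     * The tests pin down behaviour at three inputs only; everything else
     * is unconstrained, which is exactly the problem described above. */
    #include <stdio.h>
    #include <stdlib.h>

    struct candidate { int a, b; };

    /* The executable "test suite": returns the number of tests passed. */
    static int fitness(struct candidate c)
    {
        int score = 0;
        if (c.a * 1 + c.b == 3)  score++;   /* expect f(1) == 3  */
        if (c.a * 2 + c.b == 5)  score++;   /* expect f(2) == 5  */
        if (c.a * 5 + c.b == 11) score++;   /* expect f(5) == 11 */
        return score;
    }

    int main(void)
    {
        enum { POP = 32, GENS = 1000 };
        struct candidate pop[POP];
        srand(42);
        for (int i = 0; i < POP; i++)
            pop[i] = (struct candidate){ rand() % 21 - 10, rand() % 21 - 10 };

        for (int g = 0; g < GENS; g++) {
            /* Find the fittest individual... */
            int best = 0;
            for (int i = 1; i < POP; i++)
                if (fitness(pop[i]) > fitness(pop[best])) best = i;
            if (fitness(pop[best]) == 3) {
                printf("passes all tests: f(x) = %d*x + %d\n",
                       pop[best].a, pop[best].b);
                return 0;
            }
            /* ...and refill the population with mutated copies of it. */
            for (int i = 0; i < POP; i++) {
                pop[i] = pop[best];
                pop[i].a += rand() % 3 - 1;
                pop[i].b += rand() % 3 - 1;
            }
        }
        return 1;
    }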
Sometimes. Apple already has their 1 Infinite Loop building and then most of the office buildings nearby along De Anza and a few nearer the middle of town. They're pretty short on space. It makes sense for them to be building a new big building, and the cost difference between building a new boring building and a new shiny building is pretty small. This will let them move a bunch of people who need to collaborate into offices near each other, rather than having them spread across the various De Anza buildings.
From what people were saying when I was at Apple a couple of weeks ago, it's actually coming a bit too late. The company has grown faster than they expected when they started planning and so rather than being able to move everyone from De Anza into IL2, they're having to identify sets of people who need to collaborate and move them, leaving quite a few behind in De Anza. If your company is growing faster than your ability to build office space to house them, that's generally a good sign (though the insane planning permission situation in the Bay Area means that it happens there a lot more often than you'd expect).
HoloLens is not VR
Indeed. AR doesn't seem to trigger the same motion sickness responses as VR, because you retain all of the visual cues from the real world.
Microsoft is once again creating a product that nobody will use.
Microsoft has created a technology that anyone can use without feeling motion sick, but you think that it will lose in the marketplace to one that about 80% of people can use without feeling motion sick? That's an interesting perspective.
It's not so clear with 3D. It's something of a misnomer to call current displays 2D and this kind of VR interface 3D. Both provide a subset of the dozen or so cues that the human brain uses to turn inputs into a 3D mental model. They both, for example, manage occlusion and distance blurring, but neither manages (yet) to correctly adjust the focal depth of parts of the image that are further away. Motion sickness is caused by disagreements between some of these cues, and between them and the other cues that you use to build your mental model of the world. VR adjusts the image based on your head position (though latency here can cause problems, as the visual signal and the inner-ear signal arrive at different times), but it turns out that humans have a very strong notion of body image, so if the system doesn't correctly track your arm positions and update them in the game then this causes nausea in a lot of people.
Unfortunately for the 3D film and game industry, it's not the case that simply adding more cues reduces the risk of motion sickness. It turns out that a third-person perspective on a 2D display is one of the minima for the percentage of the population that experiences motion sickness. Move to first person and this gets worse, though it's still a tiny percentage (some people can't play FPS games for more than a few minutes without feeling sick). Add a few more visual cues and you get a lot more people feeling sick. There's obviously a minimum when you get all of the cues right, because otherwise people would spend their entire lives experiencing motion sickness, but so far none of the mainstream 3D systems have found another point that's close to the 2D display. If you're going to develop a first-person game, and you can either develop it for a technology that 99% of humans can use without feeling sick, or spend more money to develop it for one that 80% can use, which would you do?
char s[] = "hello world";
This will copy the contents of the string literal into a mutable array. If you write this at the global scope, the copy will be done at compile time, so you'll end up with the string in the data section, not the rodata section (if you do it in a variable with automatic storage, you'll get a copy every time the variable comes into scope). Putting constant strings in the rodata section is important for optimisation, because it allows them to be coalesced. If you write "hello world" in two places, then you'll end up with a single string in the rodata section. With some linkers, if you also write "world" somewhere else, then you'll just get two pointers into the same string (this is also one of the reasons that C uses null-terminated strings: you can't do this with Pascal strings, and it saved a useful amount of memory on the PDP-11). Once you're sharing the string, it becomes a really bad idea to allow someone to modify it, because that modification will then become visible in a different bit of code.
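For illustration, something like the following shows the difference (the coalescing and tail-sharing are toolchain-dependent, so the pointer comparisons aren't guaranteed to hold):

    /* Sketch of the behaviour described above; exact layout is up to the
     * compiler and linker, so treat the comparisons as illustrative. */
    #include <stdio.h>

    char        buf[] = "hello world";   /* copy in the writable data section */
    const char *a     = "hello world";   /* pointer into the rodata section   */
    const char *b     = "hello world";   /* typically coalesced: b == a       */
    const char *c     = "world";         /* some linkers point this at a + 6  */

    int main(void)
    {
        printf("a == b?      %d\n", a == b);      /* usually 1 */
        printf("c inside a?  %d\n", c == a + 6);  /* toolchain-dependent */
        buf[0] = 'H';                             /* fine: buf is a private copy */
        /* a[0] = 'H'; would be undefined behaviour: the literal is shared */
        return 0;
    }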