Windows' growth problem is one of image, not technology. None of my techie friends have bothered to even give it a serious look -- they assume it doesn't have the features they want and leave it at that. And I don't think it's their fault. Microsoft hasn't made it interesting to them, and it certainly doesn't have the allure of the Google name.
I don't write mobile apps, but as a dev I like to have a good mix of technology so I've bought high-end Android for my tablets and high-end Windows for my phones. Having used both sides for years, I'm intimately familiar with both systems.
I considered Windows Phone 8 to be a better experience than Android. While it didn't have as many features, the ones it did have were well polished and felt more responsive in everyday use. It's no coincidence that Microsoft was able to get away with releasing it on markedly less powerful hardware than the equivalent high-end Android phones. Windows apps, aided by excellent baked-in controls and UX guidelines, generally behave more consistently than Android apps.
Like you, the rare people I've met who do have Windows phones tend to think they're great.
But that's about to change. Oh boy. My Windows 10 phone has been a different experience: it's buggy, has unacceptable battery life, and its responsiveness is all over the place. If anything finally kills the platform off, this will be it.
In the books the Rule of Two was indeed an actual thing; it was, for the most part, followed, and it was still in place during the time of the prequels. There's a whole book devoted to the creation of the rule.
Until recently this was considered canon, though currently it's anything goes, as the new movie decided not to follow the old Expanded Universe.
The thing we were promised in Ep1 that never happened? Anakin brings about balance. But there never was balance.
Given the Sith opted for the "rule of two" -- that there would only ever be a master to wield the power and an apprentice to crave it -- perhaps killing off all the Jedi until only Yoda and Obi-Wan remained did, in a way, bring balance.
I understand that it's neat to have a deeper understanding of something that most people have pretty large misconceptions about, but let's not let it go to our heads. Dismissing people as "indoctrinated", or for any other reason, will only hamper us.
Believe me, I've been on your side of the argument more times than I can count. I've been helping with proper Unicode understanding and contributing related fixes to various projects for the better part of a decade. The understanding I've landed on is a simple one: making Unicode support a dogma is not helping anyone. It just leads to more and more complicated bugs. Instead, in almost every case I've observed, making code blind to Unicode is far more robust.
Very few people have a need to interpret their strings as anything more than an opaque chunk of bytes. For those who do, specialized APIs can exist -- but beating everyone over the head with it by making it the default in the primitive string type is going to hurt far more than it helps.
Because a single grapheme cluster in Unicode does not fit into a single UTF-32 code unit. If you don't understand the importance of grapheme clusters, try searching this essay for the word "cluster".
To have the Character type represent a full grapheme cluster is... odd.
99% of apps do nothing more than concatenate strings before passing them to some other API. These apps do not need to care about encoding or higher-level concepts like grapheme clusters. Asking every app to care about them is a major violation of the 80/20 rule.
I don't think that's the case. What's much more likely is that they don't want a chicken-and-egg problem: VR has very little penetration, so there are very few games for it, so why buy a VR headset now? By bundling games with the headset, you increase its value, perhaps enough that people will buy it just for the packaged games. It's pretty much what Nintendo did with Wii Sports.
In the case of the Rift, bootstrapping support is what the dev kits were for, and they accomplished it. There are already a fair number of games that support it, and NVIDIA and AMD already provide explicit VR support libraries for devs that help tie into the Rift and other headsets.
From my perspective, I already own three or four games that have first class Rift support that I can't wait to throw the CV1 at. Increasing the cost of the kit to bundle a bunch of stuff I don't want is a big discouragement.
In the past they've indicated a target price of less than $500. I can't help but wonder if they've found that to be a tad unrealistic and are bundling games and an Xbox One controller as a way to smooth over a higher price tag.
VR is at that stage in its life where it could become the next big thing, but it's not going to take off if the kit costs much more than a traditional high-end gaming monitor.
The benefit that native apps give is consistency. Unless your app goes out of its way to do something weird (and these definitely do exist), it gets consistency for free! The user learns how to use their environment, and this consistency gives them an anchor point for using your app efficiently.
Web apps are the wild west, with every site behaving differently in all but the basics -- and sometimes even the basics don't quite work. Even the ones that try to mimic a native app's look and feel never seem to get it right, which leads to frustration when you expect things to work in a specific way and they don't.
Until this can be solved, I don't think web apps are the solution. Even then, web apps tend to use significantly more RAM and CPU, so they're still not going to work anywhere responsiveness has value.
Is that why Windows Mobile 10, or whatever it is now called, got pushed back until 2016?
Pushed back? I'm using it on my phone right now.
AMD actually supported the instructions, but Intel's compilers just pretended that AMD didn't.
I'm not apologizing for Intel -- they've definitely got some shady dealings -- but let's get our facts straight: their compiler is not pretending AMD doesn't support SSE.
Intel's compiler does not target instruction capabilities. It targets specific CPU architectures with intimate knowledge of their pipelines. Even if your CPU supports a fancy new instruction, that instruction might perform worse in aggregate than some alternative for the task at hand.
So less about SSE, AVX, etc. and more about Sandy Bridge, Haswell, Skylake. They could make a generic version, but that wouldn't provide any benefit -- GCC, Clang, and VC++ already do that.
You could be involved with your kids and *you* could be in charge of who they are communicating with via your PlayStation.
I don't have kids, but every interaction I've ever had with them has taught me that when you're not looking, they're doing everything they can to test their boundaries. Keeping watch over them 24/7 is not a realistic ask.
Regardless, this is not a reason to weaken encryption. If watching what their kids do online is the only concern, a parental control mode that does logging should appease even the most capable of helicopter parents.
One neat thing about LLVM/Clang is that, theoretically, you could ship your application as IR files, which can be lowered to machine-specific code on the fly by a platform-specific backend.
Is this a relatively new feature? The last time I looked at LLVM IR for this purpose, there was no general "pointer-sized" data type that could accomplish this. It would be really cool if this were no longer the case.
They laughed at Einstein. They laughed at the Wright Brothers. But they also laughed at Bozo the Clown. -- Carl Sagan