...is that our selfies are safer.
Because if it is, you need to pull your head out of your ass and go do some extremely basic, cursory research on the situation in the US. There are, for sure, some loud fundamentalist Christians who like to whine about science, evolution in particular. However, they have had little success in pushing their agenda, and the US remains a powerful center of scientific research.
Trying to equate the US to ISIS is beyond stupid.
I keep thinking that if an ISP really wanted to cut costs, they could proactively monitor their network for problems:
- Provide the CPE preconfigured, at no additional cost to the customer. (Build the hardware cost into the price of service.)
- Ensure that the CPE keeps a persistent capacitor-backed log across reboots. If the reboot was caused by anything other than the customer yanking the cord out of the wall or a power outage, send that failure info upstream. Upon multiple failures in less than a few weeks, assume that the customer's CPE is failing, and call the customer with a robocall to tell them that you're mailing them new CPE to improve the quality of their service.
- Detect frequent disconnects and reconnects, monitor the line for high error rates, etc. and when you see this happening, treat it the same way you treat a CPE failure.
- If the new hardware behaves the same way, silently schedule a truck roll to fix the lines.
If done correctly (and if clearly advertised by the ISP so that users would know that they didn't need to call to report any outages), it would eliminate the need for all customer service except for billing, and a decent online billing system could significantly reduce the need for that as well.
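The failure heuristic in the list above could be sketched roughly like this. To be clear, the cause codes, thresholds, and function name here are all made up for illustration; a real CPE fleet-management system would be far more involved:

```python
from datetime import datetime, timedelta

# Hypothetical reboot-cause codes a CPE might report upstream.
# Power loss and a pulled cord are the customer's doing; anything
# else is treated as a potential hardware fault.
BENIGN_CAUSES = {"power_outage", "cord_pulled"}

def should_mail_new_cpe(reboot_log, window_days=21, max_failures=2):
    """reboot_log: list of (timestamp, cause) tuples from the CPE's
    persistent, capacitor-backed log. Returns True when there have
    been enough unexplained reboots recently to assume the CPE is
    failing and a replacement should be shipped."""
    cutoff = datetime.now() - timedelta(days=window_days)
    failures = [cause for ts, cause in reboot_log
                if ts >= cutoff and cause not in BENIGN_CAUSES]
    return len(failures) > max_failures
```

The same threshold logic would apply to the line-error-rate and disconnect/reconnect monitoring; if a freshly mailed replacement unit trips the same threshold, that's the signal to schedule the truck roll instead.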
They won't see people switching to Swift uniformly. There are trillions of lines of code written in Objective-C, and programmers already know it and are comfortable with it. There are no tools for migrating code from Objective-C to Swift, much less the hodgepodge of mixed C, Objective-C, and sometimes C++ that quite frequently occurs in real-world apps, so for the foreseeable future, you'd end up just adding Swift to your existing apps, which means you now have three or four languages mixed in one app instead of two or three, and now one of them looks completely different from the others. I just don't see very many developers seriously considering adopting Swift without a robust translator tool in place.
I do, however, expect to see Swift become the language of choice for new programmers who are coming from scripting languages like Python and Ruby, because it is more like what they're used to. In the long term, they'll outnumber the Objective-C developers, but the big, expensive apps will still mostly be written in Objective-C, simply because most of them will be new versions of apps that already exist.
BTW, Apple never really treated Java like a first-class citizen; it was always a half-hearted bolt-on language. My gut says that they added Java support under the belief that more developers knew Java than Objective-C, so it would attract developers to the platform faster. In practice, however, almost nobody ever really adopted it, so it withered on the vine. Since then, they have shipped and subsequently dropped bridges for both Ruby and Python.
Any implication that Swift will supplant Objective-C like Objective-C supplanted Java requires revisionist history. Objective-C supplanted C, not Java. Java was never even in the running. And Objective-C still hasn't supplanted C. You'll still find tons of application code for OS X written in C even after nearly a decade and a half of Apple encouraging developers to move away from C and towards Objective-C. (Mind you, most of the UI code is in Objective-C at this point.) And that's when moving to a language that's close enough to C that you don't have to retrain all your programmers.
Compared with the C-to-Objective-C transition, any transition from Objective-C to Swift is likely to occur at a speed that can only be described as glacial. IMO, unless Apple miraculously makes the translation process nearly painless, they'll be lucky to be rid of Objective-C significantly before the dawn of the next century. I just don't see it happening, for precisely the same reason that nine years after Rails, there are still a couple of orders of magnitude more websites built with PHP. Unless a language causes insane amounts of pain (e.g. Perl), people are reluctant to leave it and rewrite everything in another language just to obtain a marginal improvement in programmer comfort.
Obj-C isn't any better than C in my opinion. But, to each their own.
It is if you're doing any nontrivial amount of string manipulation.
No, they're saying Apple switched because GCC's core wasn't designed in a way that made it easy to extend the Objective-C bits in the way that Apple wanted. And that could well be part of it—I'm not sure.
But I think a bigger reason was that Apple could use Clang to make Xcode better, whereas GCC's parsing libraries were A. pretty tightly coupled to the rest of GCC (making it technically difficult to reuse them) and B. licensed under the GPL, which made linking them into non-open-source software problematic at best.
And you know what Mojang's opinion means at this point? Absolutely NOTHING. They can't make their new owner honor their intended promises, even if they were written into the deal. All Microsoft has to do is replace the boss with someone willing to change the company on Microsoft's behalf and POOF! That's happened with every other developer that's been bought out thus far and came out saying they were told/promised nothing would be changing.
Depends on how good their lawyers are. If they write into the contract a term that says that all rights revert to the original authors if the new owner violates such a term, then yes, they can force the new owners to honor those promises.
Neither am I willing to take the word of some random dude on the Internet. Barring further proof, I don't think we should put any stock in this.
You have to think that through. It doesn't prevent gun suicides: leaving aside the fact that someone can commit suicide with something else, the person doing it would be an authorized user of the gun. So no help there.
It doesn't prevent gun homicides. Again, those are committed by authorized users of the gun, or by people who have time to modify it. Remember that for all the clever electronics, guns are ultimately mechanical devices, so the electronics have to drive something that mechanically disables the gun, like a standard mechanical safety: a trigger disconnect, a firing-pin block, that kind of thing. Well, those are dead simple to bypass. So no help for stolen guns; the criminals would just remove the safety.
It doesn't prevent accidental shootings by authorized users of the gun. Since they're authorized, it will fire, so any drunken games, etc., are still just as dangerous as they were before.
Those categories already cover, by far, most of the shootings that happen.
It may not prevent shootings where a gun is taken away from someone; that depends on how it works. If it reads the fingerprint each time the trigger is depressed, then OK, it could work. But if it works like a safety that you disengage when you grab the gun, it'll still be disengaged if someone takes it away from you.
It would prevent accidental shootings where an unauthorized user gets their hands on the gun, like a kid coming across it.
OK well, that doesn't seem very useful to me. The correct answer to the problem of kids is to lock up your guns. That is much more secure, particularly since something like this would only be effective if you didn't authorize your kids to use it, or remembered to remove their authorization when they were done at the range. Having the guns secured in a safe fixes the problem nicely, and likewise provides pretty good protection against theft.
So I really don't see what this will solve, and it will make things more expensive and complicated. It just doesn't strike me as very useful.
Okay, here's a link to the research paper from 2012.
Ebola may not be easy to transmit, but it sure as heck isn't hard to transmit. Pedantically speaking, it isn't known to be airborne, but it is believed to spread via droplets (e.g. sneezes), and there's a very, very fine line between the two.
And yes, I can provide citations if you'd like, but it's not like they're very hard to find with a Google search.
Apple's done a lot of work with Grand Central Dispatch (is that the right technology?) to help developers offload as much as possible to the GPU.
You're probably thinking of OpenCL; that's what targets the GPU. GCD is a queueing engine for scheduling work across CPU threads.
Yeah, I was about to say that the reasoning behind the decision was shrouded in mystery, but same idea. Oh well.
Maybe 6-10 hours of staff time. The point is that you have to factor in what your people cost you. If someone costs $50/hour once you count salary + ERE (meaning payroll tax, benefits, insurance, and all other expenses), then 6 hours of their time costs $300. So if your transition wastes more than 6 hours of their time, it's a net loss.
You always have to keep that cost in mind when you talk about anything: What does it cost your employees to do? This is the same deal with old hardware. It can actually cost you more money, because it takes more IT time to support. Like if you have an IT guy whose salary + ERE is $30/hour and you have them spend 20 hours a year repairing and maintaining an old P4 system that keeps failing, well that is a huge waste as that $600 could have easily bought a new system that would work better and take up little, if any, of their time.
That is a reason commercial software wins out in some cases. It isn't that you cannot do something without it, just that it saves more staff time than it costs. That's why places will pay for things like iDRAC or other lights-out management, remote KVMs, and so on. They cost a lot but the time they save in maintenance can easily exceed their cost.
Just remember that unless employees are paid very poorly, $300 isn't a lot of time. So you want to analyze how much time your new system will cost (all new systems will cost some time in transition if nothing else) and make sure it is worth it.
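To make the arithmetic above concrete, here's the break-even reasoning as a couple of lines of code. The rates and hours are the ones from the examples; nothing else is implied:

```python
def staff_cost(hourly_rate, hours):
    """Fully loaded cost (salary + ERE) of the staff time consumed."""
    return hourly_rate * hours

# The transition example: $50/hour fully loaded, 6 hours wasted = $300.
transition_loss = staff_cost(50, 6)

# The old-hardware example: a $30/hour tech spending 20 hours/year
# nursing a failing P4 burns $600/year in support time alone.
old_p4_support = staff_cost(30, 20)

# A change pays off only when the staff time it saves, priced at the
# loaded rate, exceeds the staff time it consumes.
```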
Then you've never worked in an enterprise environment that uses it. You'll have a ton of tech support and maintenance costs with Linux. You not only have all the regular user shit (people who can't figure out how to use their computers, administrative stuff, etc.), but I've also observed that a good bit of running Linux requires a lot of sysadmin work, scripting, and such. We run both Linux and Windows in our environment, and we certainly make Linux work on a large enterprise scale, but our Linux lead spends an awful lot of time messing with Puppet, shell scripts, and so on to make it all happen. A lot more than we spend with AD and Group Policy to make similar things happen in Windows.
Licensing savings are certainly something you can point to, but you aren't getting out of support and maintenance. That is just part of running an enterprise. The question is what those costs would be compared to Windows, and that is likely to vary per environment.