If I'm reading the article correctly, the information that says that ads in the Facebook style are far more effective than Google's comes from...a study by Facebook. Gee, that seems totally unbiased and could in no way be slanted by them to help them convince potential advertisers to sign up. All of this seems very bizarre after reading -- for years -- about how the Facebook ad model is so deeply flawed.
I'll admit that I don't use any of those apps, so I can't say -- I would have assumed they would open links in the system's default browser -- but maybe they do it in-app.
That said, I'd expect the big guys like Twitter or Facebook to upgrade to the newer component for that very reason -- if someone gets hacked, users will fault Twitter or Facebook (and in this case, with some good cause). Still, I hadn't thought of those cases, so maybe that does make this more dangerous than I thought!
Also, a point that gets largely glossed over is that this only affects apps that use WebView as a widget -- browser apps like Chrome or Opera aren't affected because they've updated themselves to use Chromium (or something else). This may affect 60% of Android users, but what percentage of those are using the browser inside an app to visit random sketchy websites? I'm guessing the actual user base at risk is quite small.
The way this is reported, it sounds like if you use Chrome on anything south of 4.4, you're IN GRAVE MORTAL DANGER OF TEH HACKZ.
I actually heard some good things about it. Not 'This is the End' good, but not far off. I think the problem is that it would entirely disrupt the narrative the poster or writer is trying to convey if The Interview is anything but awful tripe. I doubt it will be winning any Oscars, but I've heard nothing from people who've actually seen it that suggests it's worse than decent, and it might even be pretty good.
I was at a conference, GeoWeb I think it was, in 2008. It was for web-based GIS (Geographic Information Systems), basically cartography & the web. This was maybe a year or two after Google bought out Keyhole, and Michael Jones (I think it was him) from Google was there. Also, Google had just released Chrome, so there was a lot of discussion about it. I wanted to pick Jones' brain about some KML eccentricities because I had just written a KML reader & writer. I had to wait behind about five other people who just wanted to talk to him because he was from Google.
One conversation, though, sticks out. Some guy (who seemed somewhat sycophantic for some reason) was going on & on about how Chrome was going to change the world because it was from Google, and they'd make sure it was awesome, and because they could use their influence to make sure everyone used it. I remember that Jones cut him off there (sounding more than a little annoyed) and told the guy (paraphrasing): "Google can't make anyone use anything we write. The search engine lets us put anything we create in front of people's eyes at least once -- that's it. If they try it, it has to live or die on its own merits; we can't force people to try or use it."
I'm going to have to disagree with you as well
My brother started taking photography seriously when he was living in Japan for a year. Within a year he went from being generally capable (I can take a picture and that's it) to being fairly expert -- enough that he briefly considered making a living doing photography. He credits a lot of that rapid growth to getting instant feedback. Yes, people just taking pictures willy-nilly and looking at the results does not, by itself, make for fast skill building. But I would suggest that for someone who is interested in the craft, seeing that immediate feedback -- even if you still need to fire the picture up in a proper editor to be 100% sure -- is an incredibly faster iterative cycle than taking pictures into a black hole and not seeing the results for hours at a minimum.
If the same people who grew up fascinated with cameras & photography thirty years ago had the digital cameras of today when they started out, there is no doubt at all that they would have become the experts they eventually became much, much, MUCH faster.
But obviously it has to be something a person cares about and invests the time to learn. Composition, aperture, exposure & white balance are all important things to learn, but you can learn them a hell of a lot faster when you can experiment in the field and see theory put into practice before your very eyes.
Isn't that what Seacrest's Typo (currently in litigation with BlackBerry) keyboard/case is for?
Doesn't Google already have this? The Android Device Manager lets you remotely locate, lock, or wipe your device. Is there something more to this 'kill switch'? Does it permanently disable the phone?
The family sharing thing would be very nice to have as a recently married man. I have a large music library (probably tiny to a lot of people, but big to me!) that my wife would like to access, and it seems silly that we can't both just play from it. Ditto books, movies, etc.
That said, I fully expect that over time this kind of thing will come to everyone on every platform. Microsoft dipped their toes into this during the Xbox One debacle, though what they were talking about is now irrelevant given they dropped that whole aspect of their platform. Apple adding it will speed things along, and it doesn't surprise me that they're first; I imagine this will require some licensing hoop-jumping, and whatever else I think of Apple, they do seem to have a lot more muscle in that department.
Well, fortunately iOS 8 adds a bunch of things that Android has had forever, so that will help the problem!
It's not a distraction since developers can still use Objective-C as much as they want, and will only switch to Swift if it offers significant advantages.
I'm sure that's how it will start, but they'll lose patience eventually, and probably not all that far down the road. Our Objective-C guy here is already reading up on Swift despite our large Objective-C codebase, not because he thinks it'll let him improve anything but because he'll need to know it when they deep-six Objective-C in a couple of years. To be fair, anyone who writes Apple software should be used to that at this point, so Apple probably knows that any of their developers with an ounce of common sense will start coming up with a plan to move all their stuff to Swift over the next few years.
The Montreal Gazette article covers that. They asked a computer security consultant, and he said the 24-hour delay was pretty reasonable given the impact taking down the site would have on people given the timing (tax season); not that the delay itself was ideal so much as it was a reasonable amount of time to discuss the problem and come to a decision. So my guess is that no one will get burned over that.
The real questions are fairly simple: when did the breach occur, and how did they know? Also, how did they know 900 SIN numbers were taken and how do they know more weren't? None of these are necessarily conspiracy-esque questions, but they're relevant. Though it sounds like the CRA may not be at liberty to say anything about some (or any) of that, having been asked by the RCMP not to while they firm up charges.
I agree with you, but I think we really need something other than first-past-the-post to make it really work right. It's great that we have these parties bubbling around as you say, but it'd be better if the composition of parliament looked a little more like what popular support says it should. It's a bit weird for a party with, say, 25% popular support to hold less than 10% of the house. Still, every time I get annoyed with our system, I watch the Daily Show or Colbert Report and get reminded how good we actually have it.
I think you're right that perception is a big problem. It would be really, really helpful if there were some objective way to measure how rarely a news source gets stuff wrong. Kind of like a Gold Glove for news. If news organizations could compete for that instead of their version of "First post!", it could only be better.
I completely agree, but you missed my follow-up point. They were already doing that, more or less, before the Internet era sprang up on them (and to be fair, the 24-hour news channels didn't help). The problem is that those that kept doing it started losing ground to those that put the cart before the horse, as you put it. And that happened because we all tuned into the "Latest breaking something-we'll-check-later" news. I'm not saying they're blameless, but we definitely have a huge heaping share of the responsibility.
And I also agree about the obvious party affiliations, but I think a lot of that falls into "Tell people what they want to hear and they'll tune in." That isn't hurting their viewership unfortunately, it's helping. Again, that's largely on us. We should probably be tuning into news sources that offer differing opinions rather than the one we agree with, because when those guys look at the numbers, we're voting with our eyeballs that we *want* political affiliation. It's a case of "we want what isn't actually good for us."