Exactly. It's not as if Youtube allows everything else.
There is a lot of very very nasty stuff on the internet and I'm pretty sure most of it isn't allowed on Youtube.
This is true and my pants are now definitely starting to change to a brownish hue. Knowing the currently running app greatly simplifies the task for the classifier.
This possibility and security risk is going to disappear in the next version of Android, but is very present in all current versions:
It's on Youtube too:
They could have filtered out the CbCr noise first, though. (NeatImage does this very effectively)
Granted, a finely tuned and well-crafted attack would be sophisticated enough that even I'd fall for it without being any wiser.
Although I agree with you in general, the thing is that you need to think of what the effects of a false positive are. Imagine starting up your game of solitaire and then seeing a Gmail-like login window. Because that is what could very well happen and would set off alarm bells in a fairly large set of users.
I suppose you could try to mitigate that by using a generic enough login window and only firing the phishing attack when the model is almost 100% confident that a login window is appropriate. After all, if you can have your app run in the background for several months (or longer), you can afford having it bide its time and wait for the perfect opportunity.
The question then becomes how confident the model can really be. Various methods would probably have to be included to boost the confidence. Checking which apps are installed and only attacking devices that have a pretty default set of FacebookGmailWhatsappCandyCrush apps installed would mitigate the issue of having to deal with colliding signatures of unmodeled apps.
The attack app could even collect a list of processes running on the device and/or installed apps, plus the device type, Android version, etc., and then request a classifier from a server, if one exists for that combo. Different versions of apps could still pose a problem, though.
In addition to the classifier, the app could also retrieve the tuning parameters for that specific device/Android version from a server.
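To make the "bide its time" idea above concrete, here's a minimal sketch of the gating logic. Everything in it is invented for illustration (the device profiles, package names, the `GmailLoginActivity` label, and the 0.99 threshold are all placeholders, not anything from the paper):

```python
# Hypothetical gate: only fire the phishing Activity when the device matches
# a profile we actually have a classifier for, the installed apps look like
# the "default" set we modeled, and the prediction is near-certain.
# All names and numbers below are made up for illustration.

COVERED_PROFILES = {("GT-I9300", "4.2")}  # (device model, Android version)
TARGET_APPS = {"com.google.android.gm", "com.facebook.katana"}

def should_attack(device_model, android_version, installed_apps,
                  predicted_activity, confidence, threshold=0.99):
    """Return True only for a near-certain match on a modeled device."""
    if (device_model, android_version) not in COVERED_PROFILES:
        return False  # no classifier trained for this combo
    if not TARGET_APPS <= set(installed_apps):
        return False  # unusual app set -> risk of colliding signatures
    # A false positive here means a Gmail login popping up over solitaire,
    # so err heavily on the side of doing nothing.
    return (predicted_activity == "GmailLoginActivity"
            and confidence >= threshold)
```

The point of the strict threshold is exactly the false-positive argument above: the cost of waiting is near zero, while the cost of one wrong injection is alarm bells.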
Hmm. It seems a turtle head actually is starting to poke out.
It's a very powerful attack vector.
Yes and no.
I'd like to point out that the authors have only used the attack on Galaxy S3 devices running Android 4.2, for a very specific set of apps.
"We run all experiments on Samsung Galaxy S3 devices with Android 4.2. We do not make use of any device-specific features and expect our findings to apply to other Android phones."
Basically, they use the following (world-readable) elements to generate signatures of certain Activities (parts of apps) starting up:
- CPU usage pattern
- Network usage pattern
- Increase and decrease of the shared memory (where the graphics buffer of the window compositor resides)
(they use more elements, but these are their most important ones: "Thus, the CPU utilization time, the network event and the transition model are the three most important contributors to the final accuracy. Note that though the Content Provider and input method features have lower contributions, we find that the top 2 and top 3 candidates’ accuracies benefit more from them. This is because they are more stable features, and greatly reduce the cases with extremely poor results due to the high variance in the CPU utilization time and the network features.")
For the apps mentioned, they collect this data for a large number of launches of the same Activities. They average the results (modelling each feature with a normal distribution) and use that data as input for an offline machine learning step in which a model is generated.
On the 'hacked' device itself, they can then use the live data in their classifier and predict which Activity is starting up. When a specific target Activity is started up, they immediately start up their own mockup Activity and destroy it after the data has been entered, returning the user to the previous Activity with a misleading 'Server error' dialog in between. This method is what allows the injection to work without requiring the 'draw over other apps'-permission.
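The online side then boils down to scoring a live trace against each Activity's Gaussians and only acting on a confident top match. A minimal sketch, assuming per-feature Gaussians and a uniform prior (the softmax-style confidence is my simplification, not necessarily what the authors do):

```python
import math

def log_likelihood(sample, params):
    """Sum of per-feature Gaussian log-densities for one live trace."""
    total = 0.0
    for x, (mean, std) in zip(sample, params):
        total += (-math.log(std * math.sqrt(2 * math.pi))
                  - (x - mean) ** 2 / (2 * std ** 2))
    return total

def classify(sample, models):
    """Return (best_activity, confidence), assuming a uniform prior.

    models: {activity_name: [(mean, std), ...]} as produced offline.
    """
    scores = {a: log_likelihood(sample, p) for a, p in models.items()}
    best = max(scores, key=scores.get)
    # Normalise against the best score so the exponentials can't overflow.
    z = sum(math.exp(s - scores[best]) for s in scores.values())
    return best, 1.0 / z
```

Note how fragile this is to the criticism below: any unmodeled Activity with a similar CPU/network signature still gets forced into one of the known classes, with possibly high "confidence".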
Now, anyone who has experience with machine learning can see how these results may not generalise very well, given that they used only a specific set of apps on a specific device. Choosing between 100 alternative Activities is a lot easier than choosing between the millions of Activities out there. How many signature collisions (false positives) would that lead to? A lot.
That is exacerbated by the fact that different users run different sets of apps in the background, which obviously greatly influences the CPU usage signatures and network signatures.
Besides that, the signatures are device-specific and probably Android-version-specific, making a model that covers many devices prohibitively large to distribute in a single app. Of course, this can be mitigated by just targeting one specific very popular device (such as one of the Samsung flagship models).
Their injection of the activity is also something to look at again. Consider this:
"Note that the challenge here is that this introduces a race condition where the injected phishing Activity might enter the foreground too early or too late, causing visual disruption (e.g., broken animation). With carefully designed timing, we prepare the injection at the perfect time without any human-observable glitches during the transition (see video demos)."
Everybody knows that 'carefully designed timing' and generalisability go together very poorly. Targeting one specific device may indeed work here, but I think some testing in more varied scenarios is required before we all shit our pants.
Although real time constraints play a part, the main benefit of hierarchies is specialization.
Every decision or action requires a specific skill set and a group as a whole becomes more efficient for every task that is performed by individuals dedicated to that task, simply because those individuals become proficient at it.
Making high-level decisions is also just a task in which someone can become proficient. The problem that we see today with such tasks is that there is a lot of competition for them (which leads to a certain type of individual taking those positions, not because they are fit for the task, but because they compete well in being assigned the task) and that these tasks give a disproportionate amount of power and influence. That combination is toxic.
Exactly. See here for some ideas as to what factors could influence or have definitely influenced prosperity and culture:
Nothing lasts forever. Now say something relevant.
Or just use battery packs near the solar panels. Problem solved.
Someone who doesn't cheat for $6 might cheat for $10k, but someone who will cheat for $6 will almost certainly cheat for any larger value.
Someone who will cheat for $6 can rationalize it by saying "everybody does this; it's only $6". In fact, the lower the amount, the less anyone would feel like they did something amoral. Which is exactly the opposite of what you implied.
The 'everybody does this' part is probably a huge factor in this research.
Exactly. It's not about 'not wanting to be alone with your thoughts', but about curiosity and obedience.
I thoroughly enjoy my thinking sessions, but:
1. I do so when I feel like it, instead of when being told to.
2. If there's a button in the room, I'm damn well going to press it. There's an obligatory xkcd somewhere below this comment that says it all.
1. Most people in Germany do not have their own house, but live in rented apartments. They have no possibility to install any kind of power generator, renewable or not.
That is not really true. One thing that is becoming more common is for housing corporations to set up projects where renters pay an additional fee to use power from solar panels the corporation installs. The payment and ownership models vary, but the general arrangement is quite viable. Basically, renters get to bet that their solar fees will be lower than what they would otherwise pay in electricity costs, feel good about supporting solar, and have to do nothing else. The housing corporations can (in principle) offer better panels and prices thanks to economies of scale.
It's obviously not a panacea, considering that housing corporations could really mess up their choices or try to become rich off of the projects, but in a way it is a much faster way to increase the number of installed solar panels than waiting for home owners to take the plunge.
AFAIK, GoT wasn't filmed at 60fps. Even if the broadcast format is 1080x60/30, it is just displaying (~)24fps using pulldown techniques.
Having said that, the rest of your comment is accurate. There is plenty of true 50/60fps material out there.
Or, or: CTRL+R and keyup.
If you take away the mouse in this newfangled interface, I bet CTRL+R and keyup require fewer keystrokes on average than moving the cursor to the command you want to re-run. Granted, CTRL+R and keyup could be slightly less destructive in certain cases, but other than that they're pretty much perfect.