Minor note: It's SIV, not SHIV. SHIV would stand for Simian Human Immunodeficiency Virus.
Well, that certainly makes me look stupid. Consider me fooled and schooled.
Thanks for the correction!
I just bought and installed an iiyama B2888UHSU-B1 for ~€500. It runs great at 60Hz over DisplayPort 1.2 on an AMD Radeon HD 7950. It's a TN panel (made by CMO, which apparently supplies most of the 4K monitors at this price point), but it performs quite well in the color department, according to proper tests ( http://nl.hardware.info/tv/802... - Dutch, but the tables shown at certain points in the video should be intelligible).
The 7950 drives an extra monitor over HDMI (1080p@60Hz) simultaneously without problems.
1. Using HDMI, you are limited to 30Hz, which is definitely noticeable in daily use.
2. 28" 4K is for people with great eyesight (which I thankfully still have). I'd say 32" is the minimum size for people with average eyesight. This is when using Windows 7, in which DPI scaling is hit-and-miss to the point of being almost useless.
3. 4k TVs are probably going to be bad for gaming, due to input lag etc.
4. Gaming at 4K requires me to kick in my second 7950 in CrossFire, and even then we're talking around 30 fps in modern games. Anti-aliasing is not necessary, though.
5. Windows 7 + AMD drivers + DisplayPort is a pain in the ass. If I turn off my main display with the power button on the monitor and turn it on again, the display is removed and added again, leading Windows to take a minute to completely mess up the positions of all the windows on all my monitors. Any tips on how to prevent this behavior (beyond what is found on the first 20 hits for 'windows 7 disable automatic display detection displayport') would be most welcome.
All in all, I am very happy with my purchase. Photos with enough resolution look fantastic. 4K video is amazing to look at (there are a number of clips on YouTube that you'll want to download using some downloading extension/service - at least, if you want them to play properly). Gaming in 4K is also amazing, but most of all, being able to see more content on one screen is what I had been waiting for for years (less scrolling, more working).
The one thing the Nazis had that ISIS doesn't is government.
I'm pretty sure the Nazis were also pretty big on science (and technology).
Let's not forget that the Nazis (as terrible as they and their methods were) did a lot of great things for their people. I don't see ISIS constructing a legendary countrywide road network, inventing cutting-edge technology, providing affordable transportation, etc ( http://listverse.com/2011/01/3... ). Considering the extremely (backwards) conservative religiously inspired path ISIS is on, it is hard to see how they would bring any benefits of significance to the table for the populace. Straight indoctrination, instilling terror and offering money looks to be their only way of getting people 'behind them'.
2015 will be the year of the Android x86 desktop. Not for everybody, but for all the 'relatives' that are unable to keep their installations clean and sane.
Never mind GP and GGP. Their rectal wall is already well accustomed to the Apple-branded stick poking it.
1. More walking/cycling.
"The average distance travelled per person per year by car ranges from 6,190 km in Japan to 23,130 km in USA."
( http://www.fiafoundation.org/p... - p.3)
Of course, this could also mean that stuff is generally closer to the average Japanese person than to the average USian.
"The data collected showed that Americans, on average, took 5,117 steps a day, far short of the averages in western Australia (9,695 steps), Switzerland (9,650 steps) and Japan (7,168 steps)."
( http://well.blogs.nytimes.com/... )
I'm not sure about obesity rates and diet in Australia and Switzerland, though.
2. Societal pressure
Very few words need to be said about the pressure of Japanese society on its inhabitants. Be(com)ing fat is probably not easy in Japan.
3. Portion sizes
It takes quite some effort to go from 'eat until your plate is empty or you absolutely cannot eat more' to 'eat until you feel satisfied'. It can be done, but it is much easier to just start out with less on your plate, as I believe the Japanese do.
4. Different food flavoring
Very interesting and easily grokked graph:
Not an exhaustive graph, but it's fairly clear that traditional Asian cuisine uses very different ways to add flavour to dishes. I wouldn't be surprised if consuming higher levels of soy (sauce) affects some obesity-causing mechanisms (insulin production, feelings of satiety, etc.).
When it comes to insulin production, milk also has a special place:
"In one study (PDF), milk was even more insulinogenic than white bread, but less so than whey protein with added lactose and cheese with added lactose. Another study (PDF) found that full-fat fermented milk products and regular full-fat milk were about as insulinogenic as white bread."
( http://www.marksdailyapple.com... )
"The daily per capita consumption of milk is about 105g, roughly one third of the daily per capita consumption in England and Denmark, and less than one-half of that in the U.S. and Australia"
( http://www.dairy.co.jp/eng/eng... )
Exactly. It's not as if YouTube allows everything else.
There is a lot of very, very nasty stuff on the internet and I'm pretty sure most of it isn't allowed on YouTube.
This is true and my pants are now definitely starting to change to a brownish hue. Knowing the currently running app greatly simplifies the task for the classifier.
This possibility and security risk is going to disappear in the next version of Android, but is very present in all current versions:
It's on YouTube too:
They could have filtered out the CbCr noise first, though. (NeatImage does this very effectively)
Granted, the sophistication of a finely tuned and well-crafted attack would mean even I'd fall for it without being any the wiser.
Although I agree with you in general, the thing is that you need to think of what the effects of a false positive are. Imagine starting up your game of solitaire and then seeing a Gmail-like login window. Because that is what could very well happen and would set off alarm bells in a fairly large set of users.
I suppose you could try to mitigate that by using a generic enough login window and only firing the phishing attack when the model is almost 100% confident that a login window is appropriate. After all, if you can have your app run in the background for several months (or longer), you can afford having it bide its time and wait for the perfect opportunity.
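The gating logic for that "bide its time" strategy could be as simple as the sketch below. This is purely illustrative: the threshold value, the label names, and the `should_fire` helper are all made up, not taken from the paper.

```python
# Hypothetical sketch of a patient attack trigger: only fire the phishing
# screen when the classifier's posterior for the target Activity clears a
# very high bar. Threshold and labels are assumptions for illustration.

CONFIDENCE_THRESHOLD = 0.995  # assumed tuning value, not from the paper

def should_fire(posteriors: dict, target: str) -> bool:
    """posteriors maps Activity label -> estimated probability."""
    best_label = max(posteriors, key=posteriors.get)
    return best_label == target and posteriors[best_label] >= CONFIDENCE_THRESHOLD

# The attacker can wait indefinitely: a near-certain match may only come
# once every few days, but one successful capture of credentials is enough.
```

The design trade-off is exactly the one described above: raising the threshold trades attack frequency for stealth, which a long-running background app can easily afford.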
The question then becomes how confident the model can really be. Various methods would probably have to be included to boost the confidence. Checking which apps are installed and only attacking devices that have a pretty default set of FacebookGmailWhatsappCandyCrush apps installed would mitigate the issue of having to deal with colliding signatures of unmodeled apps.
The attack app could even collect a list of processes running on the device and/or installed apps, the device type, Android version, etc., and then request a classifier from a server, if one exists for that combo. Perhaps different versions of apps could still pose a problem, though.
In addition to the classifier, the app could also retrieve the tuning parameters for that specific device/Android version from a server.
Hmm. It seems a turtle head actually is starting to poke out.
It's a very powerful attack vector.
Yes and no.
I'd like to point out that the authors have only used the attack on Galaxy S3 devices running Android 4.2, for a very specific set of apps.
"We run all experiments on Samsung Galaxy S3 devices with Android 4.2. We do not make use of any device-specific features and expect our findings to apply to other Android phones."
Basically, they use the following (world-readable) elements to generate signatures of certain Activities (parts of apps) starting up:
- CPU usage pattern
- Network usage pattern
- Increase and decrease of the shared memory (where the graphics buffer of the window compositor resides)
(they use more elements, but these are their most important ones: "Thus, the CPU utilization time, the network event and the transition model are the three most important contributors to the final accuracy. Note that though the Content Provider and input method features have lower contributions, we find that the top 2 and top 3 candidates' accuracies benefit more from them. This is because they are more stable features, and greatly reduce the cases with extremely poor results due to the high variance in the CPU utilization time and the network features.")
For the apps mentioned, they collect this data for a large number of the same Activities starting up. They average the results (model it using a normal distribution) and use that data as input for an offline machine learning step in which a model is generated.
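To make the "world-readable" point concrete, here is a rough sketch of what polling those counters could look like. The field offsets follow the standard Linux /proc layout, but I'm assuming they match what the paper reads; the polling count and interval are guesses, and on real Android the per-app network counters lived in paths like /proc/uid_stat/<uid>/ on older builds only.

```python
# Sketch: sample the world-readable counters that signatures are built from.
# /proc/<pid>/stat field offsets follow the standard Linux layout; the
# sampling parameters (n, interval) are illustrative guesses.
import time

def parse_cpu_ticks(stat_line: str) -> int:
    """utime + stime from a /proc/<pid>/stat line (fields 14 and 15)."""
    # The comm field (field 2) is parenthesised and may contain spaces,
    # so split off everything up to the closing paren first.
    rest = stat_line.rsplit(")", 1)[1].split()
    return int(rest[11]) + int(rest[12])  # utime, stime

def parse_shmem_kb(meminfo_text: str) -> int:
    """Shared memory in kB from /proc/meminfo (graphics buffers live here)."""
    for line in meminfo_text.splitlines():
        if line.startswith("Shmem:"):
            return int(line.split()[1])
    raise ValueError("no Shmem field")

def sample_signature(pid: int, n: int = 20, interval: float = 0.05):
    """Poll CPU ticks and Shmem n times; the deltas form the signature."""
    cpu, shm = [], []
    for _ in range(n):
        with open(f"/proc/{pid}/stat") as f:
            cpu.append(parse_cpu_ticks(f.read()))
        with open("/proc/meminfo") as f:
            shm.append(parse_shmem_kb(f.read()))
        time.sleep(interval)
    return ([b - a for a, b in zip(cpu, cpu[1:])],
            [b - a for a, b in zip(shm, shm[1:])])
```

Nothing here needs a permission, which is the whole point: any app can watch these counters and feed the deltas into a trained classifier.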
On the 'hacked' device itself, they can then use the live data in their classifier and predict which Activity is starting up. When a specific target Activity is started up, they immediately start up their own mockup Activity and destroy it after the data has been entered, returning the user to the previous Activity with a misleading 'Server error' dialog in between. This method is what allows the injection to work without requiring the 'draw over other apps'-permission.
Now, anyone who has experience with machine learning can see how these results may not generalise very well, given that they used only a specific set of apps on a specific device. Choosing between 100 alternative Activities is a lot easier than choosing between the millions of Activities out there. How many signature collisions (false positives) would that lead to? A lot.
That is exacerbated by the fact that different users run different sets of apps in the background, which obviously greatly influences the CPU usage signatures and network signatures.
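To put rough numbers on that base-rate problem: even a classifier that looks accurate on its 100 trained Activities drowns in false alarms once untrained Activities dominate real usage. The figures below are illustrative only, not from the paper.

```python
# Illustrative Bayes' rule calculation (numbers are made up, not from the
# paper): why signature collisions from untrained Activities swamp alarms.

def precision(tpr: float, fpr: float, prior: float) -> float:
    """P(it really is the target | classifier says 'target')."""
    return tpr * prior / (tpr * prior + fpr * (1 - prior))

# Suppose the target login Activity is 1 in 500 launches, the classifier
# catches it 90% of the time, and unknown Activities collide with its
# signature just 2% of the time:
p = precision(tpr=0.90, fpr=0.02, prior=1 / 500)
print(round(p, 3))  # prints 0.083: over 90% of alarms are false positives
```

That is why the false-positive cost discussed earlier (a Gmail login popping up over solitaire) matters so much: at realistic base rates, most of the classifier's "attack now" signals are wrong.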
Besides that, the signatures are device-specific and probably Android-version-specific, meaning a model covering many devices would become prohibitively large to distribute in a single app. Of course, this can be mitigated by targeting just one very popular device (such as one of the Samsung flagship models).
Their injection of the activity is also something to look at again. Consider this:
"Note that the challenge here is that this introduces a race condition where the injected phishing Activity might enter the foreground too early or too late, causing visual disruption (e.g., broken animation). With carefully designed timing, we prepare the injection at the perfect time without any human-observable glitches during the transition (see video demos )."
Everybody knows that 'carefully designed timing' and generalisability mix very poorly. Targeting one specific device may indeed work here, but I think some testing in more varied scenarios is required before we all shit our pants.
Although real time constraints play a part, the main benefit of hierarchies is specialization.
Every decision or action requires a specific skill set and a group as a whole becomes more efficient for every task that is performed by individuals dedicated to that task, simply because those individuals become proficient at it.
Making high-level decisions is also just a task in which someone can become proficient. The problem that we see today with such tasks is that there is a lot of competition for them (which leads to a certain type of individual taking those positions, not because they are fit for the task, but because they compete well in being assigned the task) and that these tasks give a disproportionate amount of power and influence. That combination is toxic.