One purpose for research-lab "true" virtualizations is to be successful honey-pots, allowing malware to be studied in a captive environment without giving away the fact that it's a captive environment.
If that were the case, they would use something better than looking for a file, which can be circumvented just by deleting it or by never clicking the install-service button in VirtualBox.
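For context, the kind of naive check being discussed looks roughly like this (the paths are examples of well-known VirtualBox guest-additions artifacts; real malware varies):

    # Sketch of a naive VM check: look for VirtualBox guest-additions artifacts.
    # Deleting the file, or never installing the additions, defeats it entirely.
    import os

    VBOX_ARTIFACTS = [
        r"C:\Windows\System32\drivers\VBoxGuest.sys",  # Windows guest driver
        "/dev/vboxguest",                              # Linux guest device node
    ]

    def looks_like_virtualbox():
        return any(os.path.exists(path) for path in VBOX_ARTIFACTS)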
There is a place in research labs for "true" virtualization/emulation, where a particular hardware environment is virtualized/emulated right down to the timing characteristics of the hardware it's pretending to be.
But ransomware authors are not interested in that. As in the previous story, they price-gouge based on how much you are willing to pay. Since they won't get a penny from a VM, they don't bother with these checks.
If you compare Germany with the rest of Europe, then between 2007 and 2012 Germany dropped by just 3% while the EU average was a 12% drop, and even the USA dropped more. http://ec.europa.eu/eurostat/t... Germany takes two steps forward, one step back. It could do much better if it had a sensible energy policy.
And if you begin at a date that isn't cherry picked for your argument...
The question is whether the German policy of dropping nuclear energy and going all green helps or not. For that you should pick an appropriate metric and minimize noise.
E.g. http://appsso.eurostat.ec.euro..., comparing 1990 with 2012, you'll get a drop of 24.76%, with only the UK, Denmark and a couple of Eastern Bloc states doing better, the "old" EU at 15% less, and the USA at a plus of 26%.
You added a lot of irrelevant noise to get the result you want.
In 1990-2007 Germany, with its sensible policies, was a leader in reducing CO2 emissions. In 2007-2012 it decided to drop its nuclear plants, and other states fared better.
When you combine these two periods you find that Germany did well overall, but that is despite its later policy: the other states did not, in 7 years, catch up to the lead it had built over the previous 17 years. If it had continued the policy of the previous 17 years, you would see an even bigger decrease in emissions.
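A rough back-of-the-envelope check, assuming the figures quoted in this thread (24.76% for Germany over 1990-2012, 3% over 2007-2012) and treating the 15% "old EU" and 12% EU-average figures as comparable:

    # Back out the implied 1990-2007 drops from the 1990-2012 and 2007-2012 figures.
    total_de, late_de = 0.2476, 0.03   # Germany: 1990-2012 drop, 2007-2012 drop
    total_eu, late_eu = 0.15, 0.12     # EU: 1990-2012 drop, 2007-2012 drop

    early_de = 1 - (1 - total_de) / (1 - late_de)   # roughly a 22% drop over 1990-2007
    early_eu = 1 - (1 - total_eu) / (1 - late_eu)   # roughly a 3% drop over 1990-2007
    print(f"Germany 1990-2007: {early_de:.1%}, EU 1990-2007: {early_eu:.1%}")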
BTW, your link shows 'Invalid session: xtDataset is null.' Proof by inaccessibility?
well, nuclear power is one option, but there are
That does not work yet; take Germany as a counterexample. With its green energy policy it managed in 2013 to increase its CO2 emissions while most of Europe decreased theirs; only Denmark, Estonia and Portugal had a bigger increase. http://phys.org/news/2014-05-g... Also, electricity became 60% more expensive, GDP has fallen, and industries are migrating out of Germany, and it will get worse as more renewables are added. Why do you think it will work elsewhere?
Basically any time you are using public wifi, you are vulnerable to a MITM attack. Properly secured HTTPS is safe, but an HTTP website can be modified in any way the attacker desires.
When a MITM is possible, worrying about self-signed certificates is the last of your concerns. Unless you go beyond Google's proposal and completely disallow HTTP, the attacker would use a zero-day exploit on the first HTTP page you load or modify the first
Assuming the economic system supports this.
That is not an economic problem, it's a political one. You could add a basic income without changing government wealth redistribution much. You create a basic income equal to the subsidies to the poor, which you then cancel. Then you revise the tax code and cancel tax-deductible items equal to the workers' basic income. You get a simpler tax system without changing citizens' taxes/gains much. Of course no sane politician will propose it, because then he could not promise, say, increased subsidies to families with children to win three districts. Also, with a few more advances in robotics, a government could create a basic income, cancel all taxes and sustain a stable population indefinitely. It just buys self-repairing factories that make solar-powered farmerbots, deliverybots and chefbots. Then every citizen could decide to sell what his bot produced or get free food delivered.
Unless you can verify the authenticity of the self-signed cert your connection is prone to an active MITM attack. Active attacker Charlie can literally just intercept the certificate you're being sent, substitute it for their own self-signed cert, and neither party is any the wiser. Yeah, your connection will be encrypted, but it'll be decrypted and re-encrypted by the attacker. The scenario is unlikely unless you're a "person of interest", but unless you have some Out-Of-Band verification of authenticity you're essentially just wasting CPU cycles encrypting the packets.
Repeating misinformation does not make it true. With a self-signed certificate you are probably safe from NSA snooping unless it intercepts all connections. When a MITM is involved, certificates won't save you. Say you typed google.com into the address bar. A determined phisher would then modify DNS to say that google.com is HTTP only. The loaded fake http://google.com/ page would contain a redirection to https://goog1e.com/. Since the attacker registered the goog1e.com domain and got a certificate from startssl.com or some other CA that does not bother checking anything besides domain ownership, you are owned anyway.
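For what it's worth, the out-of-band verification mentioned above is straightforward to sketch. The host name and fingerprint below are hypothetical; the point is only that pinning the self-signed certificate's fingerprint, obtained over a trusted channel, defeats the substitution attack:

    # Sketch only, with made-up host and fingerprint: connect to a server that
    # uses a self-signed certificate and verify it by comparing its SHA-256
    # fingerprint, obtained earlier over some trusted out-of-band channel.
    import hashlib
    import socket
    import ssl

    HOST = "example.internal"                      # hypothetical server with a self-signed cert
    PINNED_SHA256 = "<fingerprint obtained out of band>"

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False                     # no CA chain to check the name against
    ctx.verify_mode = ssl.CERT_NONE                # we pin the leaf certificate instead

    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)
            if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
                raise ssl.SSLError("fingerprint mismatch: possible MITM")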
Of course not. This is just as stupid as asking if you could calculate somebody's phone number.
Actually, calculating phone numbers is simple; you just need to start from a contradiction. You can derive anything from that, including your mom's phone number. It's called the principle of explosion.
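The joke does rest on a real inference rule; a one-line sketch in Lean (the names here are mine):

    -- Principle of explosion (ex falso quodlibet): from a contradiction,
    -- any proposition follows, claims about phone numbers included.
    example (P : Prop) (contradiction : False) : P :=
      False.elim contradiction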
O(log(n)) for each and every insertion, yes... when you are doing n insertions, that becomes O(n log n). If you are trying to compute the mean deviation at every step as well, you are looking at O(n^2 log n),
No, you just failed your data structures class. Insertion takes O(log(n)) including the bookkeeping needed to find the mean deviation in O(log(n)) time, which gives O(n log n) total time. All you need to calculate the deviation is the sum and number of elements above the mean and the sum and number of elements below the mean. I explained it in more detail in the parent post; there is a standard data structure that can calculate the sum of elements in a given range in O(log(n)) time and supports insertion in O(log(n)) time.
A quick Google query found the following implementation:
http://kaba.hilvi.org/pastel/pastel/sys/redblacktree.htm
because you cannot compute the mean deviation without revisiting *every* element you've collected so far, regardless of how they are stored or sorted.
Repeating a lie does not make it true.
Collecting the data alone is a log(n) step... and can be worse if you are trying to keep the data sorted while you collect it.
Use red-black trees; these keep data sorted with an O(log(n)) worst-case bound for insertion.
How can you calculate the mean deviation at any time without revisiting all of the data points that you have collected so far? How can you do it in any time better than O(n)? Calculating the standard deviation takes O(1) and does not require reexamining the data at all if you've been keeping track of the right things during data collection (which still takes O(n)).
That is a typical exam question for a data structures class. You maintain a red-black tree, and in each node you keep the sum and count of the elements of its subtree (you need to update these on rotation and that's it). As a red-black tree has logarithmic height, you can easily find the sum of elements greater than a given number in logarithmic time: just do a binary search and add up the values for the subtrees whose smallest element is greater than the searched element.
Once you have that, you get the mean absolute deviation from the following expression (divide the result by n):
(sum_greater(mean) - count_greater(mean) * mean) + (count_less(mean) * mean - sum_less(mean))
and you can get each term in O(log(n)) time.
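Since this keeps coming up, here is a minimal sketch in Python of the bookkeeping being described. It uses a plain (unbalanced) BST purely to keep the code short; a red-black tree such as the one linked above would add rebalancing, with the same sum/count fields updated during rotations, to guarantee the O(log n) worst case. All names are mine, for illustration only.

    # Sketch only: each node stores the sum and count of its subtree, which is
    # exactly the augmentation needed to answer "sum/count of elements below x"
    # in time proportional to the tree height.
    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None
            self.subtree_sum = key    # sum of all keys in this subtree
            self.subtree_count = 1    # number of keys in this subtree

    class AugmentedBST:
        def __init__(self):
            self.root = None
            self.total_sum = 0.0
            self.total_count = 0

        def insert(self, key):
            """O(height) insertion; O(log n) if the tree is kept balanced."""
            self.total_sum += key
            self.total_count += 1
            if self.root is None:
                self.root = Node(key)
                return
            node = self.root
            while True:
                node.subtree_sum += key      # maintain the augmentation on the way down
                node.subtree_count += 1
                if key < node.key:
                    if node.left is None:
                        node.left = Node(key)
                        return
                    node = node.left
                else:
                    if node.right is None:
                        node.right = Node(key)
                        return
                    node = node.right

        def sum_count_less(self, x):
            """Sum and count of stored keys strictly less than x, in O(height)."""
            s, c = 0.0, 0
            node = self.root
            while node is not None:
                if node.key < x:
                    if node.left is not None:    # whole left subtree is < x
                        s += node.left.subtree_sum
                        c += node.left.subtree_count
                    s += node.key
                    c += 1
                    node = node.right
                else:
                    node = node.left
            return s, c

        def mean_absolute_deviation(self):
            """(sum_greater - count_greater*mean) + (count_less*mean - sum_less), divided by n."""
            if self.total_count == 0:
                return 0.0
            mean = self.total_sum / self.total_count
            sum_less, count_less = self.sum_count_less(mean)
            sum_greater = self.total_sum - sum_less
            count_greater = self.total_count - count_less
            total_dev = (sum_greater - count_greater * mean) + (count_less * mean - sum_less)
            return total_dev / self.total_count

Elements equal to the mean contribute zero to the deviation either way, so it does not matter on which side of the split they land.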
Bzzt. Mathematically correct but practically wrong. Any real or simulated dataset from which you would want to compute a standard deviation will have the property that it will be a list of (most likely) double precision floating point numbers that is finite in size. This data defines a distribution that always has a finite first and second moment, so you will get a number that you can confidently call the standard deviation of the data. Even if it comes from a physical process with a nonsense distribution like a Cauchy distribution, the standard deviation you compute will give you a bound on the spread of your data. If it's Gaussian, you can go back to your statistics class and say that 95% of the data will be within two SD's, etc. If it's not, you can use the Chebyshev rule (http://en.wikipedia.org/wiki/Chebyshev_inequality) to say that at least 75 percent of the data will be within two SD's, 89% will be within 3 SD's, etc, which is much coarser information, but is still reasonable to look at for worst-case analysis.
Yes, but that is also useless once you have enough data to get a reasonable estimate. You do not need the Chebyshev inequality to get confidence intervals; just compute the 2.5th and 97.5th percentiles for a 95% interval. By the Glivenko-Cantelli theorem these empirical percentiles converge regardless of the distribution and are not sensitive to outliers.
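A quick illustration of the difference (synthetic data, my own example): empirical 2.5/97.5 percentiles versus the mean plus/minus two standard deviations on a heavy-tailed sample.

    # Sketch: on Cauchy-distributed data the SD-based interval is meaningless,
    # while the empirical percentiles still give a usable 95% interval.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.standard_cauchy(100_000)

    lo, hi = np.percentile(data, [2.5, 97.5])   # distribution-free interval
    mean, sd = data.mean(), data.std()

    print("2.5/97.5 percentile interval:", (lo, hi))
    print("mean +/- 2 SD:               ", (mean - 2 * sd, mean + 2 * sd))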
Incorrect... you need one pass to collect the data, and a second pass to compute the mean deviation. Both passes are O(n). You do not need to do a second pass to compute the standard deviation, it can be calculated in O(1) time based on data collected in the first pass. If you are only computing this once, doing two O(n)'s is just O(n), but if you are wanting to continually recalculate the mean as you add more elements to your data set, then the difference between them becomes much larger... mean deviation with data collection ends up being quadratic with the amount of data collected, while standard deviation with data collection remains linear with the amount of data collected.
Still incorrect; you need to know your data structures for that. When you use a red-black tree where each node maintains the sum of the elements in its subtree, you can compute the sum of the elements in an arbitrary interval in O(log(n)) time. That cuts the complexity from quadratic to O(n log n).
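Continuing the AugmentedBST sketch posted earlier in the thread (my illustration, not anyone's actual code), the streaming version looks like this; data_points stands in for whatever the input is:

    # O(n log n) total instead of O(n^2): each insert and each query is O(log n)
    # (with a balanced tree), so recomputing after every element stays cheap.
    stream = AugmentedBST()
    for x in data_points:   # data_points: hypothetical input iterable
        stream.insert(x)
        current_mad = stream.mean_absolute_deviation()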
How many Bavarian Illuminati does it take to screw in a lightbulb? Three: one to screw it in, and one to confuse the issue.