
Comment Re: Established science CANNOT BE QUESTIONED! (Score 1) 719

If you compare Germany with the rest of Europe, then between 2007 and 2012 Germany dropped by just 3% while the EU average is a 12% drop, and even the USA dropped more. http://ec.europa.eu/eurostat/t... Germany takes two steps forward, one step back. It could do much better if it had a sensible energy policy.

And if you begin at a date that isn't cherry-picked for your argument...

The question is whether Germany's policy of dropping nuclear energy and going all green helps or not. To answer it you should pick an appropriate metric and minimize the noise.

E.g. http://appsso.eurostat.ec.euro..., comparing 1990 with 2012, you'll get a drop of 24.76%, with only the UK, Denmark and a couple of Eastern Bloc states doing better, the "old" EU at 15% less, and the USA at plus 26%.

You added a lot of irrelevant noise to get the result you want.
In 1990-2007 Germany, with its sensible policies, was a leader in reducing CO2 emissions. In 2007-2012 it decided to drop its nuclear plants, and other states fared better.
When you combine the two periods, Germany looks good, but that is despite its later policy: the other states have not yet caught up the lead it built over the previous 17 years. If it had continued the policy of those 17 years, you would see a bigger decrease in emissions. BTW, your link shows 'Invalid session: xtDataset is null.' Proof by inaccessibility?

Comment Re: Established science CANNOT BE QUESTIONED! (Score 1) 719

[quote][quote][quote]well, nuclear power is one option, but there are ... less dangerous ones. Take a mix of solar, wind, water power (not just dammed rivers and such, also tidal).[/quote]That does not work yet; take Germany as a counterexample. With its green energy policy it managed to increase its CO2 emissions in 2013 while most of Europe decreased theirs; only Denmark, Estonia and Portugal had bigger increases. http://phys.org/news/2014-05-g...[/quote]Of course that is ignoring that in 2013 Germany still emitted less CO2 than in any year up to 2008 (for several decades) - when all NPPs were still running at full power.[/quote]

Which is again misleading, as emissions decreased everywhere in the developed world. That drop could be explained by better insulation and other improvements in efficiency.
If you compare Germany with the rest of Europe, then between 2007 and 2012 Germany dropped by just 3% while the EU average is a 12% drop, and even the USA dropped more. http://ec.europa.eu/eurostat/t... Germany takes two steps forward, one step back. It could do much better if it had a sensible energy policy.

Comment Re: Established science CANNOT BE QUESTIONED! (Score 1) 719

well, nuclear power is one option, but there are ... less dangerous ones. Take a mix of solar, wind, water power (not just dammed rivers and such, also tidal).

That does not work yet; take Germany as a counterexample. With its green energy policy it managed to increase its CO2 emissions in 2013 while most of Europe decreased theirs; only Denmark, Estonia and Portugal had bigger increases. http://phys.org/news/2014-05-g... Also, electricity has become 60% more expensive, GDP has fallen, and industries are migrating out of Germany; it will get worse as more renewables are added. Why do you think it will work elsewhere?

Comment Re:So perhaps /. will finally fix its shit (Score 1) 396

Basically any time you are using public wifi, you are vulnerable to a MITM attack. Properly secured HTTPS is safe, but an HTTP website can be modified in any way the attacker desires.

When MITM is possible, worrying about self-signed certificates is the last of your concerns. Unless you go beyond Google's proposal and disallow HTTP completely, an attacker would use a zero-day exploit on the first HTTP page you load, or modify the first .exe you download (signed by Sony, of course). Once you have a keylogger on your computer, you no longer need to worry about how much HTTPS protects you.

Comment Re:Good, we're not trying to create more work (Score 1) 688

Assuming the economic system supports this.

That is not an economic problem, it's a political one. You could add a basic income without changing government wealth redistribution much. You create a basic income equal to the subsidies to the poor, which you cancel. Then you revise the tax code and cancel tax-deductible items equal to a worker's basic income. You get a simpler tax system without changing citizens' net taxes/gains much. Of course no sane politician will propose it; then he could not promise, say, increased subsidies for families with children to win three districts. Also, with a few more advances in robotics, a government could create a basic income, cancel all taxes, and sustain a stable population indefinitely. It just buys self-repairing factories that make solar-powered farmerbots, deliverybots and chefbots. Then every citizen could decide to sell what his bot produced, or get free food delivered.

Comment Re:503 (Score 1) 396

Unless you can verify the authenticity of the self-signed cert your connection is prone to an active MITM attack. Active attacker Charlie can literally just intercept the certificate you're being sent, substitute it for their own self-signed cert, and neither party is any the wiser. Yeah, your connection will be encrypted, but it'll be decrypted and re-encrypted by the attacker. The scenario is unlikely unless you're a "person of interest", but unless you have some Out-Of-Band verification of authenticity you're essentially just wasting CPU cycles encrypting the packets.

Repeating misinformation does not make it true. With a self-signed certificate you are probably safe from NSA snooping, unless it intercepts all connections. When MITM is involved, certificates won't save you. Say you typed google.com into the address bar. A determined phisher would then modify DNS to claim that google.com is HTTP-only. The fake http://google.com/ page that loads would contain a redirect to https://goog1e.com/. Since the attacker registered the goog1e.com domain and got a certificate from startssl.com or another CA that does not bother checking anything beyond domain ownership, you are owned anyway.

Comment Re:No (Score 1) 126

Of course not. This is just as stupid as asking if you could calculate somebody's phone number.

Actually, calculating phone numbers is simple; you just need to start from a contradiction. You can derive anything from that, including your mom's phone number. It's called the principle of explosion.
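
For the curious, this is easy to make precise in a proof assistant. A minimal Lean 4 sketch (the phone number is an arbitrary made-up value; explosion lets you "derive" any number at all):

    -- The principle of explosion (ex falso quodlibet): from False, anything.
    example (P : Prop) (h : False) : P := h.elim

    -- Given contradictory hypotheses, "compute" somebody's phone number.
    -- 5550123 is an arbitrary made-up value; any other works just as well.
    example (phone : Nat) (h : 1 ≠ 1) : phone = 5550123 := absurd rfl h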

Comment Re:Issues (Score 1) 312

O(log(n)) for each and every insertion, yes... when you are doing n insertions, that becomes O(n log n). If you are trying to compute the mean deviation at every step as well, you are looking at O(n^2 log n),

No, you just failed your data structures class. Insertion takes O(log(n)) including the bookkeeping needed to find the mean deviation in O(log(n)) time, which gives O(n log n) total time. All you need to calculate the deviation is the sum and count of the elements above the mean and the sum of the elements below it. I explained it in more detail in the parent post: there is a standard data structure that can calculate the sum of the elements in a given range in O(log(n)) time and supports insertion in O(log(n)) time.
A quick Google query found the following implementation: http://kaba.hilvi.org/pastel/pastel/sys/redblacktree.htm
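
To make the idea concrete, here is a minimal C++ sketch. It is not the linked library; for brevity it uses a plain unbalanced BST, so the logarithmic bounds hold only for the tree height (a red-black tree with the same per-node bookkeeping gives the O(log n) worst case):

    #include <cstdio>
    #include <initializer_list>

    struct Node {
        double key;
        double total;                 // sum of all keys in this subtree
        int count;                    // number of keys in this subtree
        Node *left = nullptr, *right = nullptr;
        explicit Node(double k) : key(k), total(k), count(1) {}
    };

    struct AugmentedBST {
        Node *root = nullptr;
        double sum = 0.0;             // running sum of everything inserted
        int n = 0;

        void insert(double k) {
            ++n; sum += k;
            Node **p = &root;
            while (*p) {
                (*p)->total += k;     // maintain subtree aggregates on the way down
                (*p)->count += 1;
                p = (k < (*p)->key) ? &(*p)->left : &(*p)->right;
            }
            *p = new Node(k);         // never freed; this is only a sketch
        }

        // Sum and count of keys strictly less than x, in O(height) time.
        void lessThan(double x, double &s, int &c) const {
            s = 0.0; c = 0;
            for (Node *nd = root; nd != nullptr; ) {
                if (nd->key < x) {    // nd and its entire left subtree are < x
                    s += nd->key + (nd->left ? nd->left->total : 0.0);
                    c += 1 + (nd->left ? nd->left->count : 0);
                    nd = nd->right;
                } else {
                    nd = nd->left;
                }
            }
        }

        // Mean absolute deviation of everything inserted so far (n >= 1).
        double meanDeviation() const {
            double mean = sum / n;
            double sLess; int cLess;
            lessThan(mean, sLess, cLess);
            double sGE = sum - sLess; // sum of keys >= mean
            int cGE = n - cLess;      // count of keys >= mean
            return ((sGE - cGE * mean) + (cLess * mean - sLess)) / n;
        }
    };

    int main() {
        AugmentedBST t;
        for (double v : {3.0, 1.0, 4.0, 1.0, 5.0, 9.0}) t.insert(v);
        std::printf("%f\n", t.meanDeviation());   // prints 2.166667
    }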

because you cannot compute the mean deviation without revisiting *every* element you've collected so far, regardless of how they are stored or sorted.

Repeating a lie does not make it true.

Comment Re:Issues (Score 1) 312

Collecting the data alone is a log(n) step... and can be worse if you are trying to keep the data sorted while you collect it.

Use red-black trees; they keep the data sorted with an O(log(n)) worst-case bound per insertion.

How can you calculate the mean deviation at any time without revisiting *every* data point that you have collected so far? How can you do it in any time better than O(n)? Calculating the standard deviation takes O(1) and does not require reexamining the data at all if you've been keeping track of the right things during data collection (which still takes O(n)).

That is a typical exam question for a data structures class. You maintain a red-black tree, and in each node you keep the sum and count of the elements in its subtree (you need to update these on rotations, and that's it). As a red-black tree has logarithmic height, you can easily find the sum of the elements greater than a given number in logarithmic time: just do a binary search and add up the values of the subtrees whose smallest element is greater than the searched one.
Once you have that, the mean absolute deviation follows (after dividing by n) from the expression
(sum_greater(mean) - count_greater(mean) * mean) + (count_less(mean) * mean - sum_less(mean))
and you can get each term in O(log(n)) time.
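
A quick brute-force check of that expression (illustration only; the variable names mirror the terms above, and the sample is arbitrary random data):

    #include <cassert>
    #include <cmath>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<double> xs(10000);
        double sum = 0.0;
        for (double &x : xs) { x = u(rng); sum += x; }
        const double mean = sum / xs.size();

        double sumGreater = 0.0, sumLess = 0.0, direct = 0.0;
        long countGreater = 0;
        for (double x : xs) {
            if (x > mean) { sumGreater += x; ++countGreater; }
            else          { sumLess += x; }
            direct += std::fabs(x - mean);   // the naive O(n) recomputation
        }
        const long countLess = static_cast<long>(xs.size()) - countGreater;
        const double viaSums = (sumGreater - countGreater * mean)
                             + (countLess * mean - sumLess);
        assert(std::fabs(viaSums - direct) < 1e-6);  // the two agree
        return 0;
    }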

Comment Re: Basic Statistics (Score 1) 312

Bzzt. Mathematically correct but practically wrong. Any real or simulated dataset from which you would want to compute a standard deviation will have the property that it is a finite list of (most likely) double-precision floating point numbers. This data defines a distribution that always has a finite first and second moment, so you will get a number that you can confidently call the standard deviation of the data. Even if it comes from a physical process with a nonsense distribution like a Cauchy distribution, the standard deviation you compute will give you a bound on the spread of your data. If it's Gaussian, you can go back to your statistics class and say that 95% of the data will be within two SDs, etc. If it's not, you can use the Chebyshev rule (http://en.wikipedia.org/wiki/Chebyshev_inequality) to say that at least 75 percent of the data will be within two SDs, 89% within three SDs, etc., which is much coarser information, but is still reasonable to look at for worst-case analysis.

Yes, but it is also useless once you have enough data for a reasonable estimate. You do not need the Chebyshev inequality to get such intervals; just compute the 2.5th and 97.5th percentiles for a 95% interval. By the Glivenko–Cantelli theorem the empirical percentiles converge regardless of the distribution, and they are not sensitive to outliers.
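
A small C++ sketch of that approach, using a heavy-tailed Cauchy sample where the standard deviation is meaningless (the sample size and seed are arbitrary):

    #include <algorithm>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng(0);
        std::cauchy_distribution<double> cauchy(0.0, 1.0);
        std::vector<double> data(100000);
        for (double &x : data) x = cauchy(rng);

        // Empirical 2.5th and 97.5th percentiles: a distribution-free
        // interval containing 95% of the sample.
        std::sort(data.begin(), data.end());
        double lo = data[static_cast<size_t>(0.025 * data.size())];
        double hi = data[static_cast<size_t>(0.975 * data.size())];
        std::printf("95%% of the sample lies in [%.2f, %.2f]\n", lo, hi);
        return 0;
    }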

Comment Re:Issues (Score 1) 312

Incorrect... you need one pass to collect the data, and a second pass to compute the mean deviation. Both passes are O(n). You do not need a second pass to compute the standard deviation; it can be calculated in O(1) time based on data collected in the first pass. If you are only computing this once, doing two O(n)'s is just O(n), but if you want to continually recalculate as you add more elements to your data set, then the difference between them becomes much larger... mean deviation with data collection ends up being quadratic in the amount of data collected, while standard deviation with data collection remains linear in the amount of data collected.

Still incorrect; you need to know your data structures. When you use a red-black tree in which each node maintains the sum of the elements in its subtree, you can compute the sum of the elements in an arbitrary interval in O(log(n)) time, as sketched below. That cuts the complexity from quadratic to O(n log n).
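
Building on the AugmentedBST sketch shown earlier on this page (a hypothetical helper on top of that sketch, not part of any library), the interval query is just the difference of two prefix queries:

    // Sum of the elements in the half-open interval [lo, hi), computed as
    // two prefix queries; each is O(log n) on a balanced tree.
    double intervalSum(const AugmentedBST &t, double lo, double hi) {
        double sLo, sHi;
        int c;                       // counts are unused here
        t.lessThan(lo, sLo, c);      // sum of keys < lo
        t.lessThan(hi, sHi, c);      // sum of keys < hi
        return sHi - sLo;
    }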

Comment Re:Good Stuff (Score 1) 92

Well, superconductors killed my dad, so I'm looking for an immediate ban. If you don't like that, you can say it directly to the distraught face of my poor widowed mother. Superconductors also stole all of the insurance money and repeatedly raped my sister. Well, she called it rape, but really there was no resistance.

Sorry, we are superconductors. Resistance is futile.

Comment Re:I Used a Popular Online Tax Service... (Score 1) 237

1. Buy a stock that you expect to decrease in value in the short term, but to make money in the long term. You pay, say, $10,000.

2. It drops to $5,000. Sell; you can mark off the $5,000 loss on your taxes.

3. Wait 30 days, then take that $5,000 and buy the same stock again. You can still take the $5,000 loss, but if (when) the stock finally appreciates, you make money there, too. :)

What about the following plan?

1. Put $10,000 in the bank.

2. Wait 30 days, then buy $7,500 of stock and keep $2,500 for taxes.

3. ???

4. Profit

Comment Re:GCJ vs. JIT (Score 1) 181

P.S.: While I understand that much C/C++ syntax is driven by prior choices, much of this new syntax is UGLY. That's been a problem ever since templates started appearing, but it's gotten worse with every addition. At some point they need to do a de novo redefinition of the syntax, and define an isomorphism between the two syntaxes. Then a compiler switch can alternate between syntaxes until the current version can be deprecated. I'm starting to think that APL had a better design than modern C++, and that was BAD. Now, in addition to < > they've got [[ ]], and I guess next will be (( )) (unless that's already in use somewhere).

Yes, (( )) is already used in attributes (GCC's __attribute__((...))). Also, ({ }) is used to convert a compound statement into an expression. That leaves {{ }} and {( )}, and when those are taken we'll start using {) (}.
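
All three in one place, as a small sketch (the statement-expression and __attribute__ forms are GCC/Clang extensions, so build with g++ or clang++):

    #include <cstdio>

    __attribute__((unused)) static int spare;   // (( )) inside a GCC attribute

    [[noreturn]] static void die() { throw 1; } // [[ ]], standard since C++11

    int main() {
        int x = ({ int t = 6; t * 7; });        // ({ }) turns a block into a value
        std::printf("%d\n", x);                 // prints 42
        return 0;
    }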


I have hardly ever known a mathematician who was capable of reasoning. -- Plato
