Comment Re:4x strategy when? (Score 1) 58

> One of the constraints for any interesting solution is that the AIs not prioritize beating the human player over the other AIs

That's an arbitrary constraint that is hard to enforce. If the computer learns about the other players, it will find that attacking the human while he is weak increases its winning chances. And there is no need to discriminate between AI and human: just give the human the game 1/N of the time, and the rest of the time send him the message "surrender or you will be assimilated." The only problem is learning player identity/trustworthiness to enforce contracts, and denying that to the AI is the human cheating, since humans learn how to beat a specific AI.

> and that the AIs are each playing to win themselves.

In a multiplayer game the best strategy is to cooperate. It is much better for two players to form an alliance on turn 1 and coordinate attacks to destroy everybody else, then take a 50% chance in the endgame, than to go to war at the start and be weakened. A similarly diplomatic AI would form a temporary alliance of all of a nation's neighbours to dogpile it. So if there is politics, it evolves into a Borg-like strategy: on the first turn N/2+1 civs form an alliance to eliminate the others, and the only option for the rest is to form a counter-alliance. In any case, the human should follow the orders of his robotic overlord or be punished as a traitor.

Comment Re:4x strategy when? (Score 1) 58

That is not the main problem. The main problem is that players won't pay extra for a better AI, so managers decide to save on AI. Players say they want a challenging AI, yet expect to beat it every time.
It's easy to make a strategy game where the AI reigns supreme: it would be focused on micromanagement, ideally with a complex resource system.
For 4X there is a simple AI strategy that would incredibly piss off players: Borg diplomacy. On the first turn all AI players do a distributed dice roll to select a Borg player, and every other bot transfers all its resources to the Borg or does everything it can to help the Borg win. This improves the winning chances of each individual bot. So we could focus on 1v1 games.
Also, there has been an unbeatable StarCraft AI since 2010: http://kotaku.com/5667280/you-...
You could also try C-evo, where the AI doesn't cheat, though it's a bit old.
The main human advantage is in combat; without that it turns into a building race that is easier to optimize.
As you mentioned, Civ 5's main problem is that the AI is terrible. It shouldn't be too hard to add mods so you rarely win: just take the highly formulaic strategies from the forums and have the Ethiopian AI doing a religious/cultural ICS, the Babylonians going for a science victory, the Greeks a diplomatic one... you can't block them all.

Comment Re:That's not the purpose for these lab machines (Score 1) 181

One purpose for research-lab "true" virtualizations is to be successful honey-pots, allowing malware to be studied in a captive environment without giving away the fact that it's a captive environment.

If that were the case, they would add something better than looking for a file, which can be circumvented just by deleting it or by not clicking the install-service button in VirtualBox.

Comment Re:"True" virtualization (Score 1) 181

There is a place in research labs for "true" virtualization/emulation, where a particular hardware environment is virtualized/emulated right down to the timing characteristics of the hardware it's pretending to be.

But ransomware authors are not interested in that. As in the previous story, they price-gouge based on how much you are willing to pay; since they won't get a penny from a VM, they don't bother with those.

Comment Re: Established science CANNOT BE QUESTIONED! (Score 1) 719

If you compare Germany with the rest of Europe, then between 2007 and 2012 Germany dropped by just 3% while the EU average was a 12% drop, and even the USA dropped more. http://ec.europa.eu/eurostat/t... Germany takes two steps forward, one step back. It could do much better if it had a sensible energy policy.

And if you begin at a date that isn't cherry picked for your argument...

The question is whether the German policy of dropping nuclear energy and going all green helps or not. To answer that you should pick an appropriate metric and minimize noise.

E.g. http://appsso.eurostat.ec.euro..., comparing 1990 with 2012, you'll get a drop of 24.76%, with only the UK, Denmark and a couple of Eastern Bloc states doing better, the "old" EU at 15% less, and the USA at plus 26%.

You added a lot of irrelevant noise to get the result you want.
In 1990-2007 Germany, with its sensible policies, was the leader in reducing CO2 emissions. In 2007-2012 it decided to drop its nuke plants, and other states fared better.
When you combine the two periods you get that Germany did well overall, but that is despite its later policy: the other states did not, in 7 years, catch up to the lead it built over the previous 17. If it had continued the policy of the previous 17 years, you would see a bigger decrease in emissions. BTW, your link shows 'Invalid session: xtDataset is null.' Proof by inaccessibility?

Comment Re: Established science CANNOT BE QUESTIONED! (Score 1) 719

[quote]
[quote]
[quote] well, nuclear power is one option, but there are ... less dangerous ones. Take a mix of solar, wind, water power (not just dammed rivers and such, also tidal). [/quote]
That does not work yet; take Germany as a counterexample. With its green energy policy it managed in 2013 to increase its CO2 emissions while most of Europe decreased theirs; only Denmark, Estonia and Portugal had a bigger increase. http://phys.org/news/2014-05-g...
[/quote]
Of course that is ignoring that in 2013 Germany still emitted less CO2 than in any year up to 2008 (for several decades) - when all NPPs were still running at full power.
[/quote]
Which is again misleading, as emissions decreased everywhere in the developed world; that drop can be explained by better insulation and other improvements in efficiency.
If you compare Germany with the rest of Europe, then between 2007 and 2012 Germany dropped by just 3% while the EU average was a 12% drop, and even the USA dropped more. http://ec.europa.eu/eurostat/t... Germany takes two steps forward, one step back. It could do much better if it had a sensible energy policy.

Comment Re: Established science CANNOT BE QUESTIONED! (Score 1) 719

well, nuclear power is one option, but there are ... less dangerous ones. Take a mix of solar, wind, water power (not just dammed rivers and such, also tidal).

That does not work yet; take Germany as a counterexample. With its green energy policy it managed in 2013 to increase its CO2 emissions while most of Europe decreased theirs; only Denmark, Estonia and Portugal had a bigger increase. http://phys.org/news/2014-05-g... Also, electricity has become 60% more expensive, GDP has fallen, industries are migrating out of Germany, and it will get worse as more renewables are added. Why do you think it will work elsewhere?

Comment Re:So perhaps /. will finally fix its shit (Score 1) 396

Basically any time you are using public wifi, you are vulnerable to a MITM attack. Properly secured HTTPS is safe, but an HTTP website can be modified in any way the attacker desires.

When a MITM is possible, worrying about self-signed certificates is the last of your concerns. Unless you go beyond Google's proposal and completely disallow HTTP, the attacker would use a zero-day exploit on the first HTTP page you load, or modify the first .exe you download (signed by Sony, of course). Now that you have a keylogger on your computer, you no longer have to worry about how much HTTPS protects you.

Comment Re:Good, we're not trying to create more work (Score 1) 688

Assuming the economic system supports this.

That is not an economic problem, it's a political one. You could add a basic income without changing government wealth redistribution much: set the basic income equal to the subsidies to the poor, which you cancel, then revise the tax code and cancel tax-deductible items equal to the workers' basic income. You get a simpler tax system without changing citizens' taxes/gains much. Of course, no sane politician will propose it, because then he could not promise, say, increased subsidies to families with children to win three districts. Also, with a few more advances in robotics, a government could create a basic income, cancel all taxes and sustain a stable population indefinitely: it just buys self-repairing factories that make solar-powered farmerbots, deliverybots and chefbots, and then every citizen can decide to sell what his bot produced or have free food delivered.

Comment Re:503 (Score 1) 396

Unless you can verify the authenticity of the self-signed cert your connection is prone to an active MITM attack. Active attacker Charlie can literally just intercept the certificate you're being sent, substitute it for their own self-signed cert, and neither party is any the wiser. Yeah, your connection will be encrypted, but it'll be decrypted and re-encrypted by the attacker. The scenario is unlikely unless you're a "person of interest", but unless you have some Out-Of-Band verification of authenticity you're essentially just wasting CPU cycles encrypting the packets.

Repeating misinformation does not make it true. With a self-signed certificate you are probably safe from NSA snooping unless it intercepts all connections. When a MITM is involved, certificates won't save you. Say you typed google.com into the address bar. A determined phisher would modify DNS to say that google.com is HTTP-only; the loaded fake http://google.com/ page would contain a redirection to https://goog1e.com/. Since the attacker registered the goog1e.com domain and got a certificate from startssl.com or another CA that does not bother checking anything beyond domain ownership, you are owned anyway.

Comment Re:No (Score 1) 126

Of course not. This is just as stupid as asking if you could calculate somebody's phone number.

Actually, calculating phone numbers is simple: you just need to start from a contradiction. You can derive anything from that, including your mom's phone number. It's called the principle of explosion.

Comment Re:Issues (Score 1) 312

O(log(n)) for each and every insertion, yes... when you are doing n insertions, that becomes O(n log n). If you are trying to compute the mean deviation at every step as well, you are looking at O(n^2 log n),

No, you just failed your data structures class. Insertion takes O(log(n)), with the bookkeeping needed to find the mean deviation also taking O(log(n)), which gives O(n log n) total time. All you need to calculate the deviation is the sum and count of the elements above the mean and the sum and count of the elements below the mean. I explained it in more detail in the parent post: there is a standard data structure that can calculate the sum of the elements in a given range in O(log(n)) time and supports insertion in O(log(n)) time.
A quick Google query found the following implementation: http://kaba.hilvi.org/pastel/pastel/sys/redblacktree.htm

because you cannot compute the mean deviation without revisiting *every* element you've collected so far, regardless of how they are stored or sorted.

Repeating a lie does not make it true.

Comment Re:Issues (Score 1) 312

Collecting the data alone is a log(n) step... and can be worse if you are trying to keep the data sorted while you collect it.

Use red-black trees; these keep the data sorted with an O(log(n)) worst-case bound for insertion.

How can you calculate the mean deviation at any time without revisiting all of the data points that you have collected so far? How can you do it in any time better than O(n)? Calculating the standard deviation takes O(1) and does not require reexamining the data at all if you've been keeping track of the right things during data collection (which still takes O(n)).

That is a typical exam question for a data structures class. You maintain a red-black tree and in each node keep the sum and count of the elements in its subtree (you need to update these on rotation, and that's it). As a red-black tree has logarithmic height, you can easily find the sum of the elements greater than a given number in logarithmic time: just do a binary search and add up the values for the subtrees whose smallest element is greater than the searched element.
Once you have that, the total absolute deviation from the mean is given by the expression
(sum_greater(mean) - count_greater(mean) * mean) + (count_less(mean) * mean - sum_less(mean))
and you can get each term in O(log(n)) time; divide by the number of elements to get the mean deviation.
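For illustration, here is a minimal sketch of that bookkeeping in Python. A treap stands in for the red-black tree purely because it is shorter to write; any balanced BST augmented with a per-node subtree count and sum works the same way, and all of the names below are just illustrative.

import random

class Node:
    __slots__ = ("key", "prio", "left", "right", "count", "total")
    def __init__(self, key):
        self.key, self.prio = key, random.random()
        self.left = self.right = None
        self.count, self.total = 1, key          # subtree size and subtree sum

def _cnt(t): return t.count if t else 0
def _sum(t): return t.total if t else 0.0

def _pull(t):
    # recompute the augmented fields from the children
    t.count = 1 + _cnt(t.left) + _cnt(t.right)
    t.total = t.key + _sum(t.left) + _sum(t.right)

def _split(t, key):
    # split t into (keys <= key, keys > key)
    if t is None:
        return None, None
    if t.key <= key:
        a, b = _split(t.right, key)
        t.right = a
        _pull(t)
        return t, b
    a, b = _split(t.left, key)
    t.left = b
    _pull(t)
    return a, t

def _merge(a, b):
    if not a: return b
    if not b: return a
    if a.prio > b.prio:
        a.right = _merge(a.right, b)
        _pull(a)
        return a
    b.left = _merge(a, b.left)
    _pull(b)
    return b

class RunningMeanDeviation:
    def __init__(self):
        self.root, self.n, self.s = None, 0, 0.0

    def insert(self, x):                         # O(log n) expected
        a, b = _split(self.root, x)
        self.root = _merge(_merge(a, Node(x)), b)
        self.n += 1
        self.s += x

    def mean_deviation(self):                    # O(log n) expected
        if self.n == 0:
            return 0.0
        mean = self.s / self.n
        lo, hi = _split(self.root, mean)         # elements <= mean / > mean
        dev = (_sum(hi) - _cnt(hi) * mean) + (_cnt(lo) * mean - _sum(lo))
        self.root = _merge(lo, hi)               # put the tree back together
        return dev / self.n

d = RunningMeanDeviation()
for x in [3, 1, 4, 1, 5, 9, 2, 6]:
    d.insert(x)
print(d.mean_deviation())                        # 2.125 for this sample

Each insert and each mean_deviation() call touches O(log n) nodes (expected, thanks to the random priorities), so n insertions with a query after every one of them total O(n log n).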

Comment Re: Basic Statistics (Score 1) 312

Bzzt. Mathematically correct but practically wrong. Any real or simulated dataset from which you would want to compute a standard deviation will have the property that it will be a list of (most likely) double precision floating point numbers that is finite in size. This data defines a distribution that always has a finite first and second moment, so you will get a number that you can confidently call the standard deviation of the data. Even if it comes from a physical process with a nonsense distribution like a Cauchy distribution, the standard deviation you compute will give you a bound on the spread of your data. If it's Gaussian, you can go back to your statistics class and say that 95% of the data will be within two SD's, etc. If it's not, you can use the Chebyshev rule (http://en.wikipedia.org/wiki/Chebyshev_inequality) to say that at least 75 percent of the data will be within two SD's, 89% will be within 3 SD's, etc, which is much coarser information, but is still reasonable to look at for worst-case analysis.

Yes, but it's also useless once you have enough data to get a reasonable mean estimate. You do not need the Chebyshev inequality to get confidence intervals; just compute the appropriate percentiles, e.g. the 2.5th and 97.5th percentiles for a 95% interval. By the Glivenko–Cantelli theorem the empirical percentiles converge regardless of the distribution, and they are not sensitive to outliers.
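A quick sketch of what I mean, assuming numpy is available; the Cauchy sample is just a stand-in for heavy-tailed data like in the parent's example.

import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_cauchy(100_000)        # heavy-tailed: the sample SD is useless here

lo, hi = np.percentile(data, [2.5, 97.5])  # distribution-free 95% interval
print("95%% of the sample lies in [%.2f, %.2f]" % (lo, hi))
print("sample standard deviation: %.1f" % data.std())   # dominated by a few huge outliers

The percentile estimates settle down as you add more data, while the sample standard deviation of a Cauchy sample never does.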

Comment Re:Issues (Score 1) 312

Incorrect... you need one pass to collect the data, and a second pass to compute the mean deviation. Both passes are O(n). You do not need to do a second pass to compute the standard deviation, it can be calculated in O(1) time based on data collected in the first pass. If you are only computing this once, doing two O(n)'s is just O(n), but if you are wanting to continually recalculate the mean as you add more elements to your data set, then the difference between them becomes much larger... mean deviation with data collection ends up being quadratic with the amount of data collected, while standard deviation with data collection remains linear with the amount of data collected.

Still incorrect; you need to know your data structures for that. When you use a red-black tree in which each node maintains the sum of the elements in its subtree, you can compute the sum of the elements in an arbitrary interval in O(log(n)) time. That cuts the complexity from quadratic to O(n log n).
