

Comment: Re:Not that new (Score 1) 59

by Too Much Noise (#48012671) Attached to: Researchers Develop Purely Optical Cloaking

but it only really works if the object has to stay within a certain limited area

Actually, it's even more trivial than that. As they explicitly say in the video, the object has to stay out of the central area. Why? Because the central area is where you're focusing the light. Now if they would only take those four lenses, put them in a tube, and 'cloak' an absorber around the focal point to remove stray light, they would have a marvelous invention. I suggest calling it a telescope.

Welcome to elementary optics class, now with Harry Potter themed experiments.

Comment: Re:OKC's match algos suck (Score 1) 161

by Too Much Noise (#47555433) Attached to: OKCupid Experiments on Users Too

It's called the "tyranny of dimensions". The more variables you have, the more data points you need (exponentially so) to derive a meaningful partitioning analysis from it, regardless of how clever your distance algorithms are.

Indeed, but only if you insist on carrying along in your analysis all the irrelevant and correlated dimensions.

And they have hundreds of questions when a dozen would be about all the entire population of Earth could support.

So do surveys, for significantly smaller sample sizes. I wouldn't be surprised if a non-trivial percentage of those questions are intentionally redundant - you know, to check *ahem* consistency, improve accuracy, etc. If, say, you have 100 questions grouped into 10 categories with 10q/cat, you have just dropped the dimensionality significantly while at the same time having more confidence in your data. A rule of thumb in surveys is don't trust the user^W^W^W^W *ahem* trust, but verify.

Comment: Re:Ingress is unclear: not inverse cube force (Score 2) 26

It is the strength of the interaction that is found to be inverse cubic. The strength of magnetic force is inverse quadratic. If somebody found evidence of an inverse cubic force then this would be evidence of higher-spatial dimensions and very unexpected indeed.

How did you get modded informative? The magnetic component of the force between electrons in this case is indeed proportional to the inverse cube of the distance. Elementary magnetostatics, since it's the interaction force between two magnetic dipoles (look up dipole-dipole interaction if you want to see the formula). No higher dimensions or other mumbo jumbo required.
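For reference, the standard magnetostatic dipole-dipole interaction energy (the textbook formula, not anything specific to this paper) is

```latex
U(\mathbf{r}) = \frac{\mu_0}{4\pi r^3}
  \left[ \mathbf{m}_1 \cdot \mathbf{m}_2
       - 3\,(\mathbf{m}_1 \cdot \hat{\mathbf{r}})(\mathbf{m}_2 \cdot \hat{\mathbf{r}}) \right]
```

which carries the inverse-cube dependence on the separation r referred to above.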

Comment: Re:Now that Lewis's 15 minutes are up... (Score 2) 382

by Too Much Noise (#47177383) Attached to: High Frequency Trading and Finance's Race To Irrelevance

Until people can recognize the difference between front running (a biased ordering of particular market events) and high frequency trading (low latency response to available market data) then there really is no point in responding to this nonsense.

You seem confused about what frequency means - hint, it's not the inverse of latency. HFT is about (very) low asset holding times, not low latency of response (although the latter is a necessary means). Case in point: the low-latency part, when used to provide liquidity (as the standard argument goes), would be indifferent to trading patterns - much like a market maker in a stock doesn't pick and choose trades and usually has a requirement to, you know, be there to make the market if needed. HFT, in the fast-flipping sense that gave it the name, has no such compulsions and very much cares about trading patterns, which together with trend-hunting algos has a negative effect on price stability (statistically prone to abrupt swings in both directions).

So do try to understand that high frequency and low latency do not describe the same thing. Otherwise people might start to think that there really is no point in responding to your posts.

Comment: Re:Multiplatform? (Score 3, Interesting) 164

by Too Much Noise (#47030099) Attached to: 30-Day Status Update On LibreSSL

It does indeed appear to be OpenBSD only at present (from ):

... and not really that multiplatform for future development, either, since it requires (as per the linked slide)

Modern C string capabilities (strl[cat,cpy]) asprintf etc.

None of the quoted functions are standard C and strl* are BSD-only - yay for GNU-BSD strn*/strl* string function wars :(

It's all nice and good practice that they want to use the best tools available to them on OpenBSD, but not caring about what's available on other platforms is not really how one does portability, and *will* produce forks, regardless of how much the LibreSSL authors want to 'discourage' it.

Comment: Re:Wrong interpretation of energy (Score 2) 135

by Too Much Noise (#46913031) Attached to: Is There a Limit To a Laser's Energy?

He is indeed talking about 1 MeV per photon.

He is jumbling together a lot of nonsense, imnsho.

He starts with the idea of an ordinary laser. Those are not even in the X-ray range, never mind the MeV gamma-ray range. Then he wants to 'compress' the lasing cavity to *ahem* reach black-hole levels of energy density. While you can transfer energy to the radiation field (thus shifting photon energies up from the visible/UV range), you'll need a HECK of a fast compression to reach the electron-positron generation threshold. So that's nonsense.

Second, lasing does not happen in effing vacuum. Your first problem if you increase photon density, assuming your mirrors do not start to degrade before that, is nonlinear effects. Both in the lasing medium and in the mirrors. You start losing photons via multiple photon absorption that will give you back a higher energy photon that most likely escapes your cavity (goes in the wrong direction most of the time, and when it goes in the right direction the decreased mirror reflectivity and absorption/reemission x-section will not keep that energy contained for long). He never even sees that one coming.

Third, his armchair laser building scenario conveniently ignores all the losses that a real laser system has to contend with. The most obvious part being heat dissipation. Your pretty 99.999% reflective mirror will start to degrade rather quickly if you increase the incident radiation density too much without keeping it adequately cooled (this goes back to several things - normal absorption in the 0.001% that does not get reflected, having a lasing medium inside the cavity that loses energy to the walls, nonlinear absorption effects in the mirror, etc.). Once that happens, you start to say goodbye to the containment properties of your lasing cavity, and thus to your 'bajillion increase in laser field energy density' plans for taking over the world. Try again tomorrow night, Brain.

Fourth ... bah, why bother. This is pretty much a jumbled collection of ideas that you'd expect from someone taking a first course in a given field and imagining things without an effective reality check. Perfect /. front page material.

Comment: Re:Well, that does it (Score 1) 148

The most stable currency on the planet is the swiss franc

Shows what you know - the Swiss National Bank has maintained for the last few years an official 1.20 peg on EURCHF, by not letting the CHF appreciate more than that wrt EUR. Quite a remarkable thing, considering all the speculator howling at the time the peg was announced, basically everyone and their dog predicting a broken peg in a matter of months.

Regardless, that makes the CHF pretty much as stable as the EUR, so maybe you should reconsider looking down your nose at the economic knowledge of McD assistants. Vanity is such a funny thing, wouldn't you agree?

Comment: Re:Should be easy to prove or dis-prove (Score 1) 335

by Too Much Noise (#46541429) Attached to: Nate Silver's New Site Stirs Climate Controversy

He also cited a U.N. climate report, along with his own research, to assert that extreme weather events have not been increasing in frequency or intensity.

Actually, no. I know, RTFA and all, but maybe you should work on it a bit?

He actually explicitly says that very costly extreme events did not increase in frequency and the ones that did increase, like heat waves and almost-but-not-quite-floods, do not make a major appearance on the cost maps. To wit:

In fact, today's climate models suggest that future changes in extremes that cause the most damage won't be detectable in the statistics of weather (or damage) for many decades.

Basically, that looking at absolute numbers of monetary damage is the wrong statistic for gauging overall extreme weather evolution. That's all there is to it.

Now, of course, his 'analysis' is quite flimsy, consisting only of normalizing overall disaster costs by GDP, with no crosstabbing for other factors. It has merit in pointing out the obvious pitfalls of lumping numbers together with little thought about what they mean. OTOH, if I had lost a wooden cabin to a tornado 20 years ago and last year the replacement concrete house also went the way of the dodo from a tornado, of course costs went up. But the sturdiness of the construction also went up, so it really does not rule out an increase in tornado intensity. And, contrary to some posts here, he did not take increased resiliency into account - no way he could, since he's using global statistics that lump together the SE Asia tsunami and US hurricanes.

Comment: Re:it won't fit? (Score 1) 232

So by your own admission it's now 'security that *ahem* silently Just Fails to Work on *all* installation media'? Awesome. Having it work on all - 1 (actually all - see below, but what's 1 between internet strangers) will definitely be a huge step back.

Besides, nobody said anything about 'silently failing' - you can put a big red warning sign about it on the download page. Also, you should still check the image signature for that itty bitty tiny floppy install to validate its integrity (as one would do with any install medium), and package sigs can be checked outside the installation procedure anyway. So I'm kind of mystified as to what point you were trying to make.

Comment: Re:it won't fit? (Score 1) 232

Even giving it the benefit of the doubt, what would break the process so horribly if a separately packed floppy disk installer does not check signatures (link gpgv to /bin/true for instance) while the other installers do? Floppy users don't lose or gain anything while the rest get the benefit of an untampered source assurance. Or are they also trying to argue that adding signatures won't let the regular installation packages fit on floppy disks?

Comment: Re:Why can't we make it here? (Score 5, Informative) 1160

Apparently a combination of regulations and manufacturing problems. See here:

Now that is old news (2010) and apparently both Teva and Hospira are going to restart production ... slowly. However, unless and until they get a significant output going (not soon), Fresenius is the sole supplier, more or less. See here:

Comment: Re:This is more sensationalism than any real threa (Score 1) 189

by Too Much Noise (#44336249) Attached to: Collision Between Water and Energy Is Underway, and Worsening

So at 40% per year, in two and a half years there will be no water left in the bank. We are Doomed.

You, my friend, need to learn about exponential growth and, as in this case, decay. At 40% withdrawals each year, there'll be water for ... somewhat more than a hundred years. By then we'll have the technology to give each citizen the correct number of water molecules they're allowed to withdraw from the bank.

Sadly, the H2O molecule is finite, however small - were water infinitely divisible we'd have had water forever AND test Planck scale effects in the not too distant future. Provided we also developed suitably small spoons, of course.
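The arithmetic (with an illustrative reservoir size, not a figure from TFA): each year leaves 60% of the previous year's water, so

```latex
N(t) = N_0 \,(0.6)^t , \qquad
N(t_{\text{last}}) = 1 \;\Rightarrow\;
t_{\text{last}} = \frac{\ln N_0}{\ln(1/0.6)} \approx 4.51 \,\log_{10} N_0 \ \text{years}
```

so even a reservoir of, say, $10^{40}$ molecules (a few hundred cubic kilometers of water) lasts only about 180 years of such withdrawals - the logarithm is what makes even astronomically large reservoirs run out in mere centuries.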

Comment: Re:One Framework to rule them all... (Score 2) 86

Nokia messed up by not staying the course. And now they announced they are happy being the challenger. Seriously?

It's understandable that one does not have time to keep up with the news, but at least RTF summary TITLE. Digia releases Qt 5.1. Nokia has had nothing to do with Qt for the last year or so, which is a Good Thing for the toolkit's evolution.

Comment: Re:Good (Score 1) 476

by Too Much Noise (#44047443) Attached to: Have We Hit Peak HFT?

Since you want to touch on this ... heck, I have karma to burn, so why not?

Third, please find me a free market today that works the way the model predicts, and isn't literally destroying people in the process, merely to maximize profits.

As you well pointed out, this is not a realistic option. And the cause is simple: profit maximization is misaligned with other motivations - in fact, that misalignment is inherent in free markets, where competing (for-profit) interests motivate everything. Hence various side effects like negative externalities, worker exploitation, slavery, and so on.

My personal take on this is that while a real-world free market would be a system with too much complexity and chaotic behaviour to represent in a general model, simplified models with free-market rules can easily show ways in which the system reaches ... inefficient outcomes, like unfortunate equilibrium points (such as monopolies) or destructive oscillating behaviour (boom-bust cycles). It does not even require human flaws to get there; game-theoretic models with rational agents will reach the same problems, due to inherent limitations such as imperfect information and unstable equilibria. Socialism in turn, besides not really being the alternative, has its own issues, some specific and some not so much (the unstable equilibrium of requiring every participant to have motivations aligned with a common good, for one). The tricky part is, imho, that legislating the market is not a true solution - introduce into the game the motivations of legislators and the technical expertise required for tuning the system, and you just run into a different 'who watches the watchers' class of problems.

To close this rant: IMHO markets need to be seen through dynamic models, with segments often (if not always) to be watched for bad outcomes developing - and I'm including regulators in the definition of 'markets'. Blind faith in either 'free' or 'regulated' is a lazy man's easy way out of an argument that is too complex to tackle; market efficiency requires vigilance from all participants, much like liberty.
