
Comment Re:because it's cheap, and you're expendable (Score 1) 156

Companies that do this clearly don't care about productivity, because cost is only one part of the equation. No one who understands anything about business ever does anything merely because it costs less. They do things because the output per dollar spent is higher. If they are focused on cost, or do something imbecilic like think of their business in terms of "cost centres" and "profit centres" (hint: if it's necessary for your business it's a profit centre, since you can't generate a profit without it; if it isn't necessary for your business you shouldn't be doing it), then they aren't any good at running a business.

There can be reasons for putting people into one big room, and high-walled cubicles can be arranged to provide barely-sufficient privacy to get decent productivity at significantly lower cost, but none of these depend on cost alone. They depend on output per dollar.

Comment "instead of air"??? (Score 1) 116

The Hycopter uses its frame to store energy in the form of hydrogen instead of air

This makes it sound like there are all kinds of quadcopters out there that are using air to store energy. This is news to me, although given the low density of compressed-air storage I'd be pretty surprised if it's true.

Anyone have any idea why anyone would say this, as opposed to "instead of batteries"?

Comment Re:Pretty sure the heat death of the universe will (Score 2) 386

Are you sure? FooBar() and foobar() are different functions in C but the same function in Fortran, so calling foobar() from C in a fortran-compiled libfb.so is probably not going to have the effect you intended if both FooBar() and foobar() are present in libfb.so, if it is even possible at all (name-mangling might be happening).

I was writing Fortran/C multi-language applications twenty years ago, so yeah, despite a few issues this is easily possible. There was some weirdness, as I recall, because Fortran implicitly pushed the size of arrays onto the stack, so you had to do some fiddling to accommodate them. There were a few other minor issues, but what the GP said is essentially correct: Fortran is link-compatible with C. C++ mangles names, so unless functions are declared with C linkage (extern "C") all bets are off.

Pretty much any language can be interfaced with any other using tools like SWIG (dunno if anyone still uses it--it's been almost 10 years since I wrote any multi-language code, thankfully).

Comment Re:Why non-conclusive? (Score 1) 65

Personally, when I gamble and end up about 3/4 of a million dollars in the hole, I assume that I lost.

That sounds more like a conclusion than an assumption.

But the question isn't "Who won?" It is: "On the basis of this result what can we say about who will win next time?"

I don't know what kind of measures they used, and there are a couple of links in this discussion to papers pointing out how problematic p-values are, but it is perfectly possible for the weaker competitor to win any given competition. All it requires is that the width of the performance distributions be large enough to give significant overlap between the players.

People who don't understand statistics are baffled by this. They see individual instances, but statistics is about distributions. We can, by measuring instances, make judgements about the distributions they are drawn from, and knowing about the distributions we can make predictions about future instances.

In the present case, it appears that the observed distribution of performance was such that it wasn't possible to distinguish clearly between the case where the computer is slightly better than the humans but the humans got lucky, and the case where the humans are definitely better than the computer.
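
The overlap argument is easy to demonstrate with a quick simulation (all the numbers below are made up purely for illustration, not taken from the match in question): draw each side's performance in a single contest from a normal distribution and count how often the side with the lower mean comes out ahead.

```python
import random

random.seed(42)

def win_probability(mu_strong, mu_weak, sigma, trials=100_000):
    """Estimate how often the weaker player outperforms the stronger
    one in a single contest, when each performance is a draw from a
    normal distribution with the given mean and width."""
    wins = 0
    for _ in range(trials):
        if random.gauss(mu_weak, sigma) > random.gauss(mu_strong, sigma):
            wins += 1
    return wins / trials

# Illustrative numbers: one side is better on average (1.0 vs 0.8),
# but the distributions are wide (sigma = 1.0), so they overlap heavily.
p = win_probability(1.0, 0.8, 1.0)
print(f"Weaker side wins a single contest {p:.0%} of the time")
```

With these made-up parameters the weaker side still wins well over 40% of individual contests, which is why one result tells you so little about the underlying distributions.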

Comment Re:Around the block (Score 5, Insightful) 429

I may not know "what works", but I sure do know what won't.

Age is not a great arbiter of such things, but it's still true that without age there are some experiences that are hard to get.

I remember when "structured programming" was the silver bullet du jour. Then it was OO. Then it was Java (this is hard to believe, but really, Java was touted as the solution to all our ills, and people believed it for a while, rushing out to re-write perfectly good code in Java and frequently ruining it in the process). Today it's FP.

All of these, except maybe Java, brought some real good to the table. There were a variety of side-trends that never really got off the ground, at least as silver bullets, like 5GLs, whatever they were.

An older developer has had the opportunity to watch these decade-long trends and make better judgements about the curve of adoption. Will Haskell ever become a mainstream language? Nope, although it'll be used in some niche areas, the way Smalltalk still is. Will FP techniques and ideas filter into all kinds of other languages? Well, duh. Already happening. Is it worth learning a little Haskell and maybe some category theory? Sure. You can do that even while thinking the claim "apart from the runtime, Haskell is functionally pure" is precisely as impressive as the claim "apart from all the sex I've had, I'm a virgin."

Not all older developers will get any utility out of their experience. Some become cynical and dismissive. A very, very few retain their naive enthusiasm for the Next Great Thing. But many of them have a nuanced and informed attitude toward new technology that makes them extremely valuable as the technical navigator for teams they're on.

Comment Re:Depends how you evaluate the curve (Score 4, Insightful) 425

If you're looking for people who generate a profit from their time, the curve is almost certainly U-shaped based on my now not-so-light 30+ years in the trenches.

The skill distribution doesn't have to be bimodal to produce a U-shaped outcome distribution. All there has to be is a threshold of skill that must be reached to perform effectively: http://www.tjradcliffe.com/?p=...

I liken this to a wall-climbing task in an obstacle course: some combination of height/weight/strength is necessary to get over the wall. If you measure them individually you'll see broad distributions with soft correlations with ability to get over the wall (because short/strong/light people will be able to do it and tall/strong/heavy people will be able to do it, but short/strong/heavy people won't and tall/weak/light people won't, etc). The wall-climbing task requires the right combination of a small number of such skills to be over some threshold. This trivially (as the simple model in the link shows) generates the observed U-shaped distribution in programming outcomes.
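
A minimal sketch of that kind of toy model (the numbers and functional form here are invented for illustration, and not necessarily the model in the linked post): each student gets a few independent, normally distributed component skills; clearing the "wall" requires the weakest component to be over a threshold, and the resulting marks come out bimodal even though every input is a smooth bell curve.

```python
import random

random.seed(1)

def simulated_mark(n_skills=3, threshold=0.0):
    """Each student has a few independent, normally distributed
    component skills. Getting over the 'wall' requires the weakest
    component to clear a threshold; below it, marks collapse."""
    skills = [random.gauss(0, 1) for _ in range(n_skills)]
    combined = min(skills)  # the weakest component limits you
    if combined > threshold:
        return min(100, 70 + random.gauss(10, 8))  # cleared the wall
    return max(0, 35 + random.gauss(0, 12))        # stuck below it

marks = [simulated_mark() for _ in range(10_000)]
low = sum(1 for m in marks if m < 50)
high = sum(1 for m in marks if m >= 65)
mid = len(marks) - low - high
print(f"low: {low}, middle: {mid}, high: {high}")
```

The middle bin ends up sparser than either extreme: a U-shaped mark distribution from smooth, unimodal skill distributions plus one threshold.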

People who claim that anyone can be taught to code well enough to pass a first year computer science course have the opportunity to make a very simple, compelling argument in favour of their thesis: tell us how to teach people to program! If you can do that, if you can get rid of the U-shaped mark distribution that has been documented in first year computing for decades despite all kinds of efforts to understand and deal with it, your argument will be made. Everything else is just hot air: ideological and unconvincing.

There are certain things we know do not cause the bimodal mark distribution in first year computing:

1) Bad teaching (because the issue has been researched and any number of caring, intelligent teachers have thrown themselves at it, and anyone's sucky first year computing prof does not disprove this)

2) Language (because the bimodal mark distribution persists in all languages)

3) Years of coding experience of incoming students (because if that were the case it would have been identified as the differentiator in the decades of research that have gone into this: someone with no coding experience can do as well as someone with years... if they are over some threshold of skill.)

So while it's fun to watch egalitarian ideologues tub-thump this issue, they unfortunately bring nothing to the discussion but ideological blather. The U-shaped, bimodal mark distribution in first-year computing is robust evidence of a threshold of some kind that people have to be over to code well. There may be other thresholds higher up the scale (I've seen estimates that 25% of coders will never get OO... god knows what the figure is for FP, which I'm still struggling with myself.) But the claim "It would be dreadful if everyone can't code!" is not an argument, it's an emotional outburst, and we need to focus on the data, not the way we wish the world were.

Personally, I would love it if we could figure out how to teach coding better. I see journalists, economists, politicians, business-people, all sorts who are dependent on coders to help them out on the most rudimentary questions. If we could teach everyone to code the level of data-driven discourse would go through the roof. But I'm not counting on that happening any time soon.

Comment Re:Do electrons vibrate? (Score 4, Informative) 27

Do electrons actually vibrate?

No.

The electrons emit cyclotron radiation, because they are being accelerated by a magnetic field. The acceleration is always perpendicular to the electron's velocity vector, so they don't speed up, they just turn in a circle. However, all accelerating charges emit electromagnetic radiation, and in the case of an electron moving in a magnetic field in this fashion it is called "cyclotron radiation". In other contexts it is called "bremsstrahlung", and so on. Physicists often have multiple names for the same basic phenomenon manifesting itself in different circumstances.
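
For a sense of scale, the cyclotron frequency is f = qB / (2πγm), where γ is the relativistic factor. A back-of-the-envelope sketch (the 1 T field strength is an assumption for illustration; see the paper for the actual experimental parameters):

```python
import math

# Physical constants (SI units, CODATA values)
ELECTRON_CHARGE = 1.602176634e-19   # C
ELECTRON_MASS = 9.1093837015e-31    # kg

def cyclotron_frequency(b_field_tesla, kinetic_energy_ev=0.0):
    """Cyclotron frequency f = qB / (2*pi*gamma*m) of an electron in a
    magnetic field; gamma accounts for the relativistic mass increase."""
    rest_energy_ev = 510_998.95  # electron rest energy in eV
    gamma = 1.0 + kinetic_energy_ev / rest_energy_ev
    return ELECTRON_CHARGE * b_field_tesla / (2 * math.pi * gamma * ELECTRON_MASS)

# A slow electron in a 1 T field radiates at about 28 GHz; an 18 keV
# beta-decay electron (tritium endpoint region) comes out slightly lower,
# which is exactly why the frequency encodes the electron's energy.
f0 = cyclotron_frequency(1.0)
f18 = cyclotron_frequency(1.0, 18_000)
print(f"{f0 / 1e9:.1f} GHz vs {f18 / 1e9:.1f} GHz")
```

The energy-dependence of γ is what makes this a precision spectroscopy technique: measure the frequency, and you've measured the electron's kinetic energy.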

Adding "electrons vibrate" or "everything vibrates" to this account adds nothing and obscures the actual source of the radiation, which is the continuity conditions on the electromagnetic field. These conditions are described by Maxwell's equations, which predict such radiation. There is exactly nothing in Maxwell's equations that could be said to describe a "vibrating electron" in this context.

The summary is equivalent to an account of a baseball game written by someone who has never seen a ball, or a game, of any kind. It is depressing that "science journalism" scrapes along at a standard that is an order of magnitude below anything found in sports journalism, which is itself not exactly a paragon of insight and coherence.

The paper itself can be found here: http://arxiv.org/pdf/1408.5362...

It is a beautiful piece of work that really does open up new doors to precision measurement of beta spectra.

Comment Re:Baptists are already writing this week's sermon (Score 3, Insightful) 69

The headline should really read 3.46-Billion-Year-Old 'Fossils' May Not Have Been Created By Life Forms.

And then apply the rule that "may" and "may not" have exactly the same literal meaning. Any headline that contains anything like "may" or "may not" is screaming sensationalism. "Scientists dispute oldest fossils" is informative, "Fossils may not have been created by life" is identical to "Fossils may have been created by life", and is therefore meaningless.

Comment Re:The problem isn't intelligence - per se (Score 4, Interesting) 385

Intelligence in the intellectual, logical-reasoning sense is an evolutionary epiphenomenon. It is only weakly selected for. We can tell this because its distribution in the population is so broad. There are no gazelles that run at half the speed of the fastest[*], but there is no shortage of people with IQs half the top who still manage to get along (putting "the top" at around 160 and "the bottom" around 80, which is the lower end of the "gets along OK in society most of the time" range.)

Logical, linear reasoning is a dancing-bear trick: something we've managed to train our brains to do, not something they were built for.

Some people happen to be really good at it. This can be a problem for them because so much of what humans do, and the accounts they give of it, make very little sense to the untutored mind.

We live in the Age of Bayes, and the Bayesian Revolution over the past three hundred years (which takes in a lot of time before Bayes himself, or the recognition that what we were doing is fundamentally Bayesian) has taught us some really important lessons about ourselves. Mostly how damned stupid we have been, even the highly intelligent. We've spent centuries arguing nonsense, from how three is equal to one for large values of three to the dharma of the tao.

In the past century or so we've been calling out the people who are most "intellectually gifted" and expecting them to solve our problems (in a past age it was the pious, or the people "of good family", etc). This has created a bind for them, because for most of that time we've also had no idea why people do what they do (spoiler: mate competition and selection play large roles, although we are still a long way from any kind of comprehensive understanding.)

There are also ethical constraints on what can be done to solve human problems. The utopian projects of the 20th century, despite their profound irrationality in so many respects, were manifestations of this belief that the human intellect had all the right tools for the job of reforming the planet. It didn't work, and that leaves us in the situation we are in today, where intellect is suspect as well as desired.

As such, it isn't necessarily a shock that people identified as "intellectually gifted" should feel less adequate after exemplary lives. Nor is it likely that's going to change any time soon, as we continue to look to the intellectually gifted to save us from ourselves, while steadfastly refusing to spend any time looking hard in a mirror for the source of most human problems.

[*] this may be false... feel free to fact-check me!

Comment Re:What the fuck are you talking about? (Score 0, Offtopic) 385

Their high priests and emperors would cut the hearts out of living individuals, and then make those victims eat their own still-beating hearts before burning them.

Your slip into hyperbole here is not helping your case, which is otherwise pretty accurate.

The human heart is very well protected, and humans have only ten or fifteen seconds of consciousness without blood flow. Even granting they were using stone knives, which are insanely hard and sharp, cutting through the rib cage and severing the aorta, the vena cava and the pulmonary veins and arteries is not the work of ten or fifteen seconds.

It is also likely that the victims were too busy screaming to be properly said to eat anything.

So while the New World was in fact dominated by a blood cult that was carried out more formally in the politically organized areas, and it is not impossible that a few still-beating hearts were shoved into a few still-working mouths, the ritual of "feeding the victim their own heart" was a ritual, not a literal thing, and is best described as such.

You probably know all that, but the people who believe the myths about non-European cultures likely don't.

The blood cult was practiced all over the New World, much as the Norse Pantheon is recognizably related to the Sumerian one. Ideas travel. So even amongst the pre-political peoples of what is now Canada the practice of ritual torture, sacrifice and cannibalism was common, as was the denuding of entire landscapes for the sake of game.

The notion that North American native peoples lived in any kind of harmony with nature is simply false. We have overwhelming archeological and ethnographic evidence to the contrary, and anyone who believes otherwise is engaging in Creationist levels of evidence-denial.

Comment Re:The third factor (Score 4, Interesting) 385

You've likely encountered this quote, but it bears repeating:

Nothing in the world can take the place of Persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent. The slogan 'Press On' has solved and always will solve the problems of the human race. -- Calvin Coolidge, 30th president of US (1872 - 1933)

Comment Re: Dark matter doesn't exist. (Score 2) 117

One only needs to define the photon as a thermodynamic reexpansion of spacetime that was compressed by nearby matter.

Unfortunately that is not a meaningful statement. I have no idea what a "thermodynamic reexpansion" is versus a "non-thermodynamic reexpansion", for example. Nor is it clear how this would be expressed mathematically as a generalization of Maxwell's equations. Nor does your paper do anything more than repeat this meaningless statement.

There may be something meaningful and interesting to say about the thermodynamics of electromagnetism and space-time, but until you give us a mathematical statement of the physical principles you are trying to enunciate it is going to be very difficult for anyone to understand what, if anything, you are talking about.

Comment Re:How have we ruled out measurement or model erro (Score 1) 117

I'm waiting for someone to explain why so many seem so sure that it actually is some form of exotic matter.

You'll forgive me for believing that that is a lie, because this has been explained many, many times. On the balance of probabilities, you are an irrational nutjob who is resistant to any actual explanation or evidence.

That said, I'll waste a few minutes of my precious time pretending your question is sincere and that you have a non-zero chance of changing your mind.

The reason we focus on exotic matter is that the observational evidence for a source of anomalous gravitational attraction is robust and diverse, and alternative theories have either failed to account for it or have failed other observational tests.

It isn't as if we have a single measurement on one system. We have detailed measurements of the rotation curves of many spiral galaxies. We have the motion of galaxies in clusters. We have the motion of clusters themselves. We have gravitational lensing studies--which probe the dark matter distribution in a completely different way from dynamical studies. We have cosmological simulations that can't explain galaxy formation without dark matter. We have structure in the cosmic microwave background that is evidence for dark matter: it can be explained easily with dark matter, but only with great difficulty without it. This is evidence in exactly the way that hearing a bark is evidence for a dog: a dog easily explains barking, while alternative explanations have much lower priors and so are less plausible. To deny this is to deny Bayes.
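
The dog-and-bark analogy is just Bayes' theorem with numbers plugged in (the probabilities below are made up purely to illustrate the mechanism):

```python
def posterior(prior_dog, p_bark_given_dog, p_bark_given_no_dog):
    """Bayes' theorem: P(dog | bark) from the prior and the two likelihoods."""
    p_bark = (p_bark_given_dog * prior_dog
              + p_bark_given_no_dog * (1 - prior_dog))
    return p_bark_given_dog * prior_dog / p_bark

# Even a modest prior for "dog" gets strongly boosted, because a dog
# explains barking far better than any alternative hypothesis does.
p = posterior(prior_dog=0.2, p_bark_given_dog=0.9, p_bark_given_no_dog=0.02)
print(f"P(dog | bark) = {p:.2f}")
```

A 20% prior becomes a posterior over 90%, and that is the structure of the dark-matter argument: many independent observations, each easily explained by dark matter and only awkwardly by anything else, compound in exactly this way.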

Did I have to dig deeply into some mysterious literature to find this? No. I had to look at Wikipedia: https://en.wikipedia.org/wiki/...

Do you see why I think your question is dishonest?

So that makes "maybe the measurements are in error" much less plausible than "dark matter exists".

With regard to new physics, the problem is that the low-hanging fruit have been picked, and what remains has a hard time explaining all the diverse observational evidence. It is hard to find a theory that explains all the phenomena that are observed that is not "there is some kind of exotic matter out there". None-the-less, we are actively testing a few such theories. Again from Wikipedia: https://en.wikipedia.org/wiki/...

So now your question has been answered. You need wait no more. You can either change your mind, and agree that dark matter is the most plausible explanation of the robust and diverse observations, or you can explain why you find some alternative hypothesis more plausible. But you can never again honestly ask, "Why don't people take observational error or alternative theories more seriously?"
