
Comment: Re:No, not so much (Score 1) 255 255

But if you are merely becoming a pro at using that 1 tool you are likely not thinking past how to use that tool.

True, but the problem is that employers define jobs in terms of tool use. You can be good at JavaScript and happily manipulate the DOM to your heart's content, but if you don't have node.js or some other library/API on your résumé they won't look at you.

To give an idea of how bizarre it has gotten, I'm seeing a ridiculous number of job ads for senior software positions that list "git and GitHub" as either requirements or nice-to-haves. To me that's like asking for the ability to use a pencil and paper in an engineering design position. Anyone remotely qualified will have said experience, or be able to come up to speed on it in a day or three. It's like HR just has to make that list of tools as long as humanly possible.

Take anyone who has used Mercurial or any other modern distributed source control system and sit 'em in front of git and they'll be fine within a very short time. Take anyone who has used Eclipse and sit 'em in front of Visual Studio (or vice versa) and they'll be able to do the job adequately almost immediately. They won't know all the stupid Visual Studio tricks that someone who has used it since the 6.0 days knows, but so what?

And if a person is not capable of that, you've made a bad hire, because technology and tools change all the time, and if they can't adapt to your toolset they won't be able to adapt to the future. So there is absolutely no loss to a company in hiring someone unfamiliar with their specific tooling. There might even be a gain, because if they fail to adapt they can be let go painlessly while still on probation.

So long as companies continue to use toolprint matching for hiring, schools will focus on teaching the tool-du-jour.

Comment: Missing calibration data, not drivers (Score 1) 253 253

The summary, as usual, is terrible. The missing files were calibration data for the engine controllers, not executables of any kind.

However, the article says some astonishingly stupid things, like: "'Nobody imagined a problem like this could happen to three engines,' a person familiar with the 12-year-old project said."

Well, duh.

Since the human imagination is known to be almost completely useless as a tool for understanding reality or predicting the future, this has to be the most obvious observation since the dawn of time.

Anything that can happen, will. Since we have finite resources, we have to guess what is most likely to happen. If we have data, we can run predictive models to inform our guesses. The one thing we know with near-certainty is that what we imagine might happen is completely irrelevant to what will actually happen.

The human imagination is no better at understanding or predicting today than it was when people were imagining bloodletting balanced the humours. It makes as much sense mentioning it in this context as saying, "Our astrologers and scriers never saw this coming!"

Comment: Re:Projections based on what? (Score 1) 310 310

I'm pretty strongly supportive of both technological (nuclear, solar/storage) and political (carbon tax/tariff) approaches to climate change, but as a computational physicist I agree with your evaluation of models. They contain a lot of good science, but the non-physical parameterizations they depend on make them non-predictive, certainly with regard to the details of regional climates.

Unfortunately, this published dataset reflects the hubris of climate scientists that they actually have predictive models, and plays into policy planners' and the public's unsupported belief that climate models are good guides to local policy (as opposed to sufficient to say, "We really shouldn't be dumping gigatonnes of greenhouse gases into the air regardless of the detailed consequences, because our economy is finely tuned to the current climate and even relatively small disruptions could do Very Bad Things.")

My prediction is that in 20 years' time most of the predictions in these models will turn out to be badly wrong. It would be almost miraculous if models that parameterize away as much of the physics as our current ones do, and impose important constraints like top-of-atmosphere heat balance by hand, came close to the real climate. No one who has spent their career modeling systems that can actually be tested in the lab believes otherwise.

Comment: Hilarious (Score 1) 72 72

There is no shortage of Linux devs. If there were, two things would be true:

1) salaries for Linux developers would be going up

2) people with two decades of Linux development experience would have no trouble getting a job

Neither of these is true. Ergo, there is no shortage of good developers with Linux experience.

Pretty much every Linux job I've seen posted in the past few months requires (that is, not "nice to have" but "requires") a dozen other skills that make up a combined skill set that only one in a million people have. Got Linux experience plus sockets plus Python plus git (that last one is a clue to what's going on...)? OK, but you also need experience with OpenGL and three years of CG coding on major animation projects.

People aren't looking for workers, they're looking for replaceable parts. The "git" thing gives it away: rather than burn, I don't know, an hour or two teaching someone the basics of git, or asking them to read a book on it, they won't consider anyone who can't simply sit down and start working.

The specific-industry-experience requirements are likewise a give-away: it isn't enough to have 3D experience, it's gotta be in animation, or they won't touch you, because those skills, man, they aren't transferable in any way.

Bytes used in animation are totally different than bytes used in medical imaging, and your understanding of one kind of processing pipeline precludes you from learning any other. You'd have to unlearn all that other stuff to make room for the new, and it would be at least a couple of days before you're a 110% productive member of the team! We can't have that!

[This is a synthetic example of things I've seen over the years, but it's all too prevalent an attitude and seems to be getting worse, and all the while the whining about "no devs available" gets louder.]

Comment: Not exactly a reliable source (Score 1) 169 169

No one who knows anything about nuclear power is going to be "excited" by anything the BAS releases on the topic, because they are a purely political anti-nuclear organization with a radical anti-nuclear agenda.

Whatever they have released, the odds are so overwhelming that it's nothing but a propaganda tool in their war on nuclear energy--a war whose success has helped create our current climate crisis--that it isn't worth anyone's time to even look at.

Comment: Re:because it's cheap, and you're expendable (Score 1) 156 156

Companies that do this clearly don't care about productivity, because cost is only one part of the equation. No one who understands anything about business ever does anything because it costs less. They do things because the output per dollar spent is higher. If they are focused on cost, or do something imbecilic like think of their business in terms of "cost centres" and "profit centres" (hint: if it's necessary for your business it's a profit centre, since you can't generate a profit without it... if it isn't necessary for your business you shouldn't be doing it) then they aren't any good at running a business.

There can be reasons for putting people into one big room, and high-walled cubicles can be arranged to produce barely-sufficient privacy to get decent productivity at significantly lower cost, but none of these depend on cost. They depend on output per dollar.

Comment: "instead of air"??? (Score 1) 116 116

The Hycopter uses its frame to store energy in the form of hydrogen instead of air

This makes it sound like there are all kinds of quadcopters out there that are using air to store energy. This is news to me, although given the low density of compressed-air storage I'd be pretty surprised if it's true.

Anyone have any idea why anyone would say this, as opposed to "instead of batteries"?

Comment: Re:And the answer is... (Score 3, Insightful) 150 150

What makes the callers angriest? Call center employees who act like robots.

Also, hearing "We are receiving higher than usual call volume..." every single time you call anywhere for any reason. Nothing says "We are lying incompetents" more clearly.

Comment: Re:Pretty sure the heat death of the universe will (Score 2) 386 386

Are you sure? FooBar() and foobar() are different functions in C but the same function in Fortran, so calling foobar() from C in a Fortran-compiled library is probably not going to have the effect you intended if both FooBar() and foobar() are present, if it is even possible at all (name-mangling might be happening).

I was writing Fortran/C multi-language applications twenty years ago, so yeah, despite a few issues this is easily possible. There was some weirdness, as I recall, because Fortran implicitly pushed the size of arrays onto the stack, so you had to do some fiddling to accommodate them. There were a few other minor issues, but what the GP said is essentially correct: Fortran is link-compatible with C. C++ mangles names, so unless functions are declared with C linkage (extern "C") all bets are off.
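The case-folding hazard the GP worries about can be sketched without any Fortran toolchain at all: model a Fortran-style linker as a symbol table that lowercases every name before storing it. This is a toy model for illustration only (the class name and behaviour are invented, not any real linker's API), but it shows immediately why FooBar() and foobar() can coexist in C yet collide on the Fortran side.

```python
class CaseInsensitiveLinker:
    """Toy model of a linker that, like classic Fortran, folds symbol case."""

    def __init__(self):
        self._symbols = {}

    def define(self, name, func):
        key = name.lower()  # Fortran-style case folding
        if key in self._symbols:
            raise ValueError(f"duplicate symbol (after case folding): {name}")
        self._symbols[key] = func

    def resolve(self, name):
        return self._symbols[name.lower()]


linker = CaseInsensitiveLinker()
linker.define("foobar", lambda: "lowercase foobar")

# In C these would be two distinct symbols; under case folding they collide.
try:
    linker.define("FooBar", lambda: "CamelCase FooBar")
except ValueError as e:
    print(e)  # duplicate symbol (after case folding): FooBar

# Any spelling resolves to the one surviving definition.
print(linker.resolve("FOOBAR")())  # lowercase foobar
```

A real C-to-Fortran call has extra wrinkles (compilers historically appended underscores to Fortran symbol names, and modern code uses Fortran's BIND(C) instead), but the collision logic is the same.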

Pretty much any language can be interfaced with any other using tools like swig (dunno if anyone still uses that--it's been almost 10 years since I wrote any multi-language code, thankfully).

Comment: Re:Why non-conclusive? (Score 1) 65 65

Personally, when I gamble and end up about 3/4 of a million dollars in the hole, I assume that I lost.

That sounds more like a conclusion than an assumption.

But the question isn't "Who won?" It is: "On the basis of this result what can we say about who will win next time?"

I don't know what kind of measures they used, and there are a couple of links in this discussion to papers pointing out how problematic p-values are, but it is perfectly possible for the weaker competitor to win any given competition. All it requires is that the width of the performance distributions be large enough to give significant overlap between the players.

People who don't understand statistics are baffled by this. They see individual instances, but statistics is about distributions. We can, by measuring instances, make judgements about the distributions they are drawn from, and knowing about the distributions we can make predictions about future instances.

In the present case, it appears that the observed distribution of performance was such that it wasn't possible to distinguish clearly between the case where the computer is slightly better than the humans but the humans got lucky, and the case where the humans are definitely better than the computer.
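The overlapping-distributions point is easy to demonstrate with a quick Monte Carlo sketch. All the numbers below are assumptions for illustration: a "computer" that is genuinely stronger on average, with per-game performance noisy enough that the two distributions overlap heavily.

```python
import random

random.seed(42)

N = 10_000
computer_mean, human_mean, sigma = 0.2, 0.0, 1.0  # assumed skill gap and spread

# Count matches where the nominally weaker human outperforms the computer.
human_wins = sum(
    random.gauss(human_mean, sigma) > random.gauss(computer_mean, sigma)
    for _ in range(N)
)
frac = human_wins / N
print(f"weaker side wins {frac:.1%} of matches")  # roughly 44% despite being weaker
```

With that much overlap, the weaker side wins nearly half the time, so a single match (or even a short series) tells you very little about which distribution is actually centred higher.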

Comment: Re:Around the block (Score 5, Insightful) 429 429

I may not know "what works", but I sure do know what won't.

Age is not a great arbiter of such things, but it's still true that without age there are some experiences that are hard to get.

I remember when "structured programming" was the silver bullet du jour. Then it was OO. Then it was Java (this is hard to believe, but really, Java was touted as the solution to all our ills, and people believed it for a while, rushing out to re-write perfectly good code in Java and frequently ruining it in the process). Today it's FP.

All of these, except maybe Java, brought some real good to the table. There were a variety of side-trends that never really got off the ground, at least as silver bullets, like 5GLs, whatever they are.

An older developer has had the opportunity to watch these decade-long trends and make better judgements about the curve of adoption. Will Haskell ever become a mainstream language? Nope, although it'll be used in some niche areas, the way Smalltalk still is. Will FP techniques and ideas filter in to all kinds of other languages? Well, duh. Already happening. Is it worth learning a little Haskell and maybe some category theory? Sure. You can do that even while thinking the claim "apart from the runtime, Haskell is functionally pure" is precisely as impressive as the claim "apart from all the sex I've had, I'm a virgin."

Not all older developers will get any utility out of their experience. Some become cynical and dismissive. A very, very few retain their naive enthusiasm for the Next Great Thing. But many of them have a nuanced and informed attitude toward new technology that makes them extremely valuable as the technical navigator for teams they're on.

Comment: Re:Depends how you evaluate the curve (Score 4, Insightful) 425 425

If you're looking for people who generate a profit from their time, the curve is almost certainly U-shaped based on my now not-so-light 30+ years in the trenches.

The underlying skill distribution doesn't have to be bimodal to produce a U-shaped outcome distribution. All there has to be is a threshold of skill that must be reached to perform effectively:

I liken this to a wall-climbing task in an obstacle course: some combination of height/weight/strength is necessary to get over the wall. If you measure them individually you'll see broad distributions with soft correlations with ability to get over the wall (because short/strong/light people will be able to do it and tall/strong/heavy people will be able to do it, but short/strong/heavy people won't and tall/weak/light people won't, etc). The wall-climbing task requires the right combination of a small number of such skills to be over some threshold. This trivially (as the simple model in the link shows) generates the observed U-shaped distribution in programming outcomes.
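A toy version of that wall-climbing model makes the mechanism concrete. The attribute names, the combination rule, and the mark numbers below are all invented for illustration: several smoothly-distributed attributes, a single pass/fail threshold on their combination, and a final mark that mostly reflects which side of the threshold you land on.

```python
import random

random.seed(1)

marks = []
for _ in range(1000):
    # Broad, unimodal, independent attribute distributions.
    height = random.gauss(0, 1)
    strength = random.gauss(0, 1)
    weight = random.gauss(0, 1)

    over_wall = (height + strength - weight) > 0  # threshold on a combination
    base = 75 if over_wall else 35                # outcome set mainly by the threshold
    marks.append(base + random.gauss(0, 10))      # plus ordinary per-person noise

low = sum(m < 50 for m in marks)
mid = sum(50 <= m < 60 for m in marks)
high = sum(m >= 60 for m in marks)
print(low, mid, high)  # two big humps, thin middle: a U-shaped mark distribution
```

Every input here is a smooth bell curve, yet the marks come out bimodal, which is the whole point: a U-shaped grade distribution is evidence of a threshold effect, not of two distinct populations of students.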

People who claim that anyone can be taught to code well enough to pass a first year computer science course have the opportunity to make a very simple, compelling argument in favour of their thesis: tell us how to teach people to program! If you can do that--if you can get rid of the U-shaped mark distribution that has been documented in first year computing for decades despite all kinds of efforts to understand and deal with it, your argument will be made. Everything else is just hot air: ideological and unconvincing.

There are certain things we know do not cause the bimodal mark distribution in first year computing:

1) Bad teaching (because the issue has been researched and any number of caring, intelligent teachers have thrown themselves at it, and anyone's sucky first year computing prof does not disprove this)

2) Language (because the bimodal mark distribution persists in all languages)

3) Years of coding experience of incoming students (because if that were the case it would have been identified as the differentiator in the decades of research that have gone into this: someone with no coding experience can do as well as someone with years... if they are over some threshold of skill.)

So while it's fun to watch egalitarian ideologues tub-thump this issue, they unfortunately bring nothing to the discussion but ideological blather. The U-shaped, bimodal mark distribution in first-year computing is robust evidence of a threshold of some kind that people have to be over to code well. There may be other thresholds higher up the scale (I've seen estimates that 25% of coders will never get OO... god knows what the figure is for FP, which I'm still struggling with myself.) But the claim "It would be dreadful if everyone can't code!" is not an argument, it's an emotional outburst, and we need to focus on the data, not the way we wish the world were.

Personally, I would love it if we could figure out how to teach coding better. I see journalists, economists, politicians, business-people, all sorts who are dependent on coders to help them out on the most rudimentary questions. If we could teach everyone to code the level of data-driven discourse would go through the roof. But I'm not counting on that happening any time soon.

Comment: Re:Do electrons vibrate? (Score 4, Informative) 27 27

Do electrons actually vibrate?


The electrons emit cyclotron radiation, because they are being accelerated by a magnetic field. The acceleration is always perpendicular to the electron's velocity vector, so they don't speed up, they just turn in a circle. However, all accelerating charges emit electromagnetic radiation, and in the case of an electron moving in a magnetic field in this fashion it is called "cyclotron radiation". In other contexts it is called "bremsstrahlung", and so on. Physicists often have multiple names for the same basic phenomenon manifesting itself in different circumstances.
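For reference, the frequency of that cyclotron radiation is a standard textbook result (not something from the article itself):

```latex
f_c = \frac{eB}{2\pi m_e} \quad\text{(nonrelativistic)}, \qquad
f_c = \frac{eB}{2\pi \gamma m_e} \quad\text{(relativistic)}
```

Since the Lorentz factor \(\gamma = 1 + K/(m_e c^2)\) depends on the electron's kinetic energy \(K\), a precise measurement of \(f_c\) in a known field \(B\) amounts to a precise measurement of the electron's energy, which is exactly what makes the technique interesting for beta spectra.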

Adding "electrons vibrate" or "everything vibrates" to this account adds nothing and obscures the actual source of the radiation, which is continuity conditions on the electromagnetic field. These conditions are described by Maxwell's Equations, which predict such radiation. There is exactly nothing in Maxwell's Equations that could be said to describe a "vibrating electron" in this context.

The summary is equivalent to an account of a baseball game written by someone who has never seen a ball, or a game, of any kind. It is depressing that "science journalism" scrapes along at a standard that is an order of magnitude below anything found in sports journalism, which is itself not exactly a paragon of insight and coherence.

The paper itself can be found here:

It is a beautiful piece of work that really does open up new doors to precision measurement of beta spectra.

Comment: Re:Baptists are already writing this week's sermon (Score 3, Insightful) 69 69

The headline should really read 3.46-Billion-Year-Old 'Fossils' May Not Have Been Created By Life Forms.

And then apply the rule that "may" and "may not" have exactly the same literal meaning. Any headline that contains anything like "may" or "may not" is screaming sensationalism. "Scientists dispute oldest fossils" is informative, "Fossils may not have been created by life" is identical to "Fossils may have been created by life", and is therefore meaningless.

Comment: Re:The problem isn't intelligence - per se (Score 4, Interesting) 385 385

Intelligence in the intellectual, logical-reasoning sense is an evolutionary epiphenomenon. It is only weakly selected for. We can tell this because its distribution in the population is so broad. There are no gazelles that run at half the speed of the fastest[*] but there is no shortage of people with IQs that are half the top and still manage to get along (putting "the top" at around 160 and "the bottom" around 80, which is the lower end of the "gets along OK in society most of the time" range.)

Logical, linear reasoning is a dancing-bear trick: the remarkable thing is not that we do it well, but that we've managed to train ourselves to do it at all.

Some people happen to be really good at it. This can be a problem for them because so much of what humans do, and the accounts they give of it, make very little sense to the untutored mind.

We live in the Age of Bayes, and the Bayesian Revolution over the past three hundred years (which takes in a lot of time before Bayes himself or the recognition that what we were doing is fundamentally Bayesian) has taught us some really important lessons about ourselves. Mostly how damned stupid we have been, even the highly intelligent. We've spent centuries arguing nonsense, from how three is equal to one for large values of three to the dharma of the tao.

In the past century or so we've been calling out the people who are most "intellectually gifted" and expecting them to solve our problems (in a past age it was the pious, or the people "of good family", etc). This has created a bind for them, because for most of that time we've also had no idea why people do what they do (spoiler: mate competition and selection play large roles, although we are still a long way from any kind of comprehensive understanding.)

There are also ethical constraints on what can be done to solve human problems. The utopian projects of the 20th century, despite their profound irrationality in so many respects, were manifestations of this belief that the human intellect had all the right tools for the job of reforming the planet. It didn't work, and that leaves us in the situation we are in today, where intellect is suspect as well as desired.

As such, it isn't necessarily a shock that people identified as "intellectually gifted" should feel less adequate after exemplary lives. Nor is it likely that's going to change any time soon, as we continue to look to the intellectually gifted to save us from ourselves, while steadfastly refusing to spend any time looking hard in a mirror for the source of most human problems.

[*] this may be false... feel free to fact-check me!
