Comment Re:Agree about U curve, disagree with the rest (Score 1) 425
The whole discussion about factors is wrong. A bad coder (and there are a lot of those around) has _negative_ productivity!
I do not agree. In fact, I have been a 10x coder on occasion. The point is not that I am extremely good; it is that I write solid code that does not have to be thrown away after a short time and does not cause more problems than it solves. And I am not much slower doing it than others are churning out trash. A better explanation is that most coders are in fact 0.1x coders and should never have gotten into the profession in the first place, for lack of dedication, talent and skill. Any good coder then becomes a 1x coder, and that is a far more accurate way of seeing things. A 1x coder in this sense is simply somebody who delivers a solid result with good architecture, design and implementation in reasonable time. Most coders cannot do that.
I guess this "10x coder" nonsense basically comes from management being unable to admit that they should never have hired the bad ones. Calling them 0.1x coders makes it amply clear who is responsible; calling them 1x coders hides that management screwed up badly in hiring them.
It is actually not true. Sure, exam results tend toward a Gaussian distribution, since the average students must pass somehow. But any well-designed technical exam will show a bi-modal distribution unless everybody is good at the subject. The catch is that designing exam questions for this is hard: you need questions that require an insight in order to be solvable, but once that insight has been had, the answer must be easy.
Potential programming skill is very much bi-modal, just like any other skill that requires abstract thinking. Either you have it, and then learning makes sense, or you do not. The same holds for most engineering fields, mathematics and physics. Of course, the rockstar vs. cretin comparison is utter nonsense, just dishonest propaganda language. You simply get two Gaussian distributions with not much in between. "Rockstars" are still rare (and generally not very useful). I have seen this time and again: in fellow students, as a teacher, as a reviewer, etc.
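A minimal sketch of what "two Gaussian distributions with not much in between" means. All numbers here (population means, spreads, sizes) are made up for illustration; the point is only that a mixture of two well-separated normals leaves a valley at the overall mean, so "average" describes almost nobody:

```python
import random
import statistics

random.seed(42)

# Hypothetical exam scores from two populations:
# one that has the required insight, one that does not.
low  = [random.gauss(30, 8) for _ in range(500)]   # lacks the aptitude
high = [random.gauss(75, 8) for _ in range(500)]   # has the aptitude
scores = low + high

# The overall mean falls in the sparse valley between the two modes,
# so few individual students actually score anywhere near it.
mean = statistics.mean(scores)
near_mean = sum(1 for s in scores if abs(s - mean) < 8)
print(f"overall mean: {mean:.1f}")
print(f"scores within 8 points of the mean: {near_mean} of {len(scores)}")
```

Running this shows that only a small fraction of the simulated scores land near the combined mean, which is exactly why a single "average coder" figure is misleading for a bi-modal population.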
The article is another transparent attempt to make people believe coding is like manual labor and hence should be in the cheapest salary class possible.
I gave a literal quote from the abstract. Can you read?
And where did you fail basic arithmetic?
Obviously you do not qualify as "digital native" as you have seen through the scam and have moved on.
What I find when reviewing projects or analyzing problems is that young engineers often have serious trouble seeing what is under the surface. All those "frameworks" and tools stand in their way, and it becomes very hard for them to understand what is going on. The result is bad performance, huge bloat, insecurity and poor reliability.
What the IT field has really missed, and what the younger generation is suffering the effects of, is that fundamentals are important: if you do not understand them, your product will suck. Sure, if you want to push out crappy software fast, get some young morons that have high intelligence but no experience or actual insight. But if you want to get the cost of IT down by actually solving problems and having them stay solved, make sure a significant part of your engineers are experienced and knowledgeable. Yes, that means getting older folks onto the teams. Don't get me wrong: many older folks are just as incompetent. But experience can _only_ be found among older folks, as it takes time to acquire.
While I did not read the paper, this is rather damning. It means these people do not qualify as scientists, or even as intelligent human beings, because such a test is conclusive: if the dummy and the real item perform the same, then the "real item" is not real at all. There is no other valid conclusion.
All the proponents of this "device" are just an example of how incompetent and delusional humans can get. From the NASA publication abstract: "Thrust was observed on both test articles, even though one of the test articles was designed with the expectation that it would not produce thrust. Specifically, one test article contained internal physical modifications that were designed to produce thrust, while the other did not (with the latter being referred to as the "null" test article)."
Listen up, kids: this means they tested the "true em-drive" and a dummy, and _both_ gave them thrust. The dummy is specifically designed so that it _cannot_ do this! So the "thrust" comes from some other effect, not the "em-drive". The truly and utterly pathetic thing here is that NASA actually did sound science, and people lack the reading comprehension skills to even understand the abstract.
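The control logic here can be sketched in a few lines. These readings are entirely made up (they are not NASA's data); the sketch only shows the inference: if the difference between the real article and the null article is smaller than the scatter in the measurements, the observed "thrust" is a systematic artifact of the test rig, not the device:

```python
import statistics

# Hypothetical micro-newton thrust readings, invented for illustration.
real_thrust = [91.2, 88.7, 93.4, 90.1, 89.8]   # "true" em-drive article
null_thrust = [90.5, 92.1, 88.9, 91.3, 89.6]   # control article (cannot thrust)

# If the control shows the same signal as the device under test,
# the signal belongs to the measurement setup, not the device.
diff = statistics.mean(real_thrust) - statistics.mean(null_thrust)
noise = statistics.stdev(real_thrust + null_thrust)

if abs(diff) < noise:
    print("real and null articles indistinguishable: the effect is an artifact")
```

The same reasoning applies to any dummy-versus-real comparison: a positive result on the null article invalidates the positive result on the real one.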
It is no surprise you do not understand what I say, your mind is foggy and blurred.
"Published" and published are two different things. Where is that peer-reviewed paper in a respected journal? Where are the papers describing independent verification?
Only problem is that buying companies that already do it does not create any jobs.
Fast, cheap, good: pick two.