Someone wrote that grading on a curve works in academia but not in industry. But why should it work for grading exams when it doesn't work for ranking workers? The academics using it, of all people, should know better.
Using a curve in academia is more practical because a student's primary output is the grade, which is already numeric. In contrast, an employee's primary output is the work they do, which can only be (poorly) approximated by metrics. Whether it's a good idea is a separate question, though.
Finally, the second fallacy that makes this fundamentally broken is the assumption that the skill distribution in a work team or class is normal (follows a bell curve). There is absolutely no guarantee of that, because, heck, you aren't hiring the idiots, are you? I am sure the company is hiring only "rock star" developers. Same with the students: they had to pass stringent exams and meet admission criteria that the majority of the population can't. So you have a sample that isn't representative of the entire population (where the bell curve would be valid), and all bets are off, because the system was built on an invalid assumption.
The most extreme example is the constant distribution: the case where every student turns in a blank sheet of paper (identical "skill" level) for the exam and, under the curve, still passes. If you wanted a normal distribution of skill, you would have to pick students or hire employees randomly from the entire population. Not very practical, though.
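A quick simulation makes the truncation point concrete. This is a minimal sketch in Python; the population parameters and the 10% admission rate are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical applicant pool: skill is normally distributed in the
# population at large (illustrative parameters).
applicants = rng.normal(loc=100, scale=15, size=100_000)

# Admissions/hiring keeps only the top 10% of applicants.
cutoff = np.quantile(applicants, 0.90)
admitted = applicants[applicants >= cutoff]

def skewness(x):
    d = x - x.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

# The admitted group is a *truncated* normal: a hard left edge at the
# cutoff and a long right tail, so the symmetric bell shape is gone.
print(f"pool     mean={applicants.mean():.1f} std={applicants.std():.1f} "
      f"skew={skewness(applicants):.2f}")   # skew near 0
print(f"admitted mean={admitted.mean():.1f} std={admitted.std():.1f} "
      f"skew={skewness(admitted):.2f}")     # clearly positive skew
```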
This isn't quite true, and it seems to be based on the idea that people are reducible to one-dimensional numbers. Yes, the ability of the individuals (as measured by the admission/hiring process) will follow a truncated bell curve (the highest N candidates from the applicant distribution). But the quality of the work actually done will be approximately normally distributed, because countless other independent factors contribute to the result, and the sum of many small independent effects tends toward a normal distribution (the central limit theorem). The only exception is when people act collectively to alter the distribution, as in the blank-sheet example you gave above.
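To see why, here is a minimal sketch (same made-up numbers as the earlier one): take the truncated ability distribution of the admitted group, then add a pile of small independent factors (health, luck, team, tooling, ...). The hard left edge gets smeared out and the output looks roughly normal again:

```python
import numpy as np

rng = np.random.default_rng(0)

# Truncated ability: the top 10% of a normal applicant pool.
pool = rng.normal(loc=100, scale=15, size=100_000)
ability = np.sort(pool)[-10_000:]

# Work output = ability plus many small, independent factors.
# Twenty uniform "nuisance" factors here are purely illustrative.
noise = rng.uniform(-5, 5, size=(10_000, 20)).sum(axis=1)
output = ability + noise

def skewness(x):
    d = x - x.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

print(f"ability skewness: {skewness(ability):.2f}")  # noticeably positive
print(f"output  skewness: {skewness(output):.2f}")   # close to 0
```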
My opinion on the subject (as a student) is that relative grades are somewhat useful, since they help normalize for the difficulty of different units, which would otherwise penalize students who took harder ones. However, the amount of scaling should be monitored: a shift of more than 15% suggests something is seriously wrong with the unit itself.
Perhaps more fundamental is the idea that the grade distribution should only ever be translated, not forced to fit any particular shape. That way the average mark can be adjusted while the relative differences between students are preserved.
Another line of thought is that scaling should only ever increase marks, never decrease them, so as to avoid demotivating students. If a unit turns out to be too easy, the preferable fix is to make it harder in future years, not to scale marks down.
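Both ideas fit in a few lines. A minimal sketch, assuming marks on a 0-100 scale; the function name and the numbers are hypothetical:

```python
from statistics import mean

def translate_marks(marks, target_mean, allow_decrease=False):
    """Shift every mark by the same amount so the class mean hits
    target_mean, preserving all relative differences. With
    allow_decrease=False, a negative shift is dropped, so scaling
    can only ever raise marks."""
    shift = target_mean - mean(marks)
    if not allow_decrease:
        shift = max(0, shift)
    # Clamp to the valid range; this only bites at the extremes.
    return [min(100, max(0, m + shift)) for m in marks]

# A harder-than-intended unit: raw mean of 52, target mean of 65.
raw = [35, 48, 52, 60, 65, 52]
print(translate_marks(raw, target_mean=65))
# Every mark moves up by 13; the gaps between students are unchanged.
```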