You sure remember the Fox incarnation.
Good! I feel better.
5 years' experience in a technology that has only existed for 14 months and cannot be taught in a classroom outside of business anyway. The requirements are way past ridiculous and border on the insane.
There's a "shortage" of good liars. I know a guy who was a fantastic BS-er that way. He had a network of fake references, for example. "Sure, he was doing Java for us in 1989. We used the first beta out. And he used Silverlight when it was still Bronzelight."
I felt too slimy to copy his techniques, but in a competitive world where a position receives hundreds of resumes, it's "survival of the fibbist", I hate to say.
Ego comes first...The unqualified never know that they are unqualified. It's just a bunch of meanies [to them], picking on them.
Heaven forbid if we ever got a president like that.
Or, white men are conditioned to an environment of abrasive competition, and not to complain about such behavior.
In a more general sense, different cultures value different things in different proportions, and that is going to create conflict. "X people don't do enough Y" and/or "X people do too much Z".
Our egos make our own culture the center of the universe, and we try to shape the universe in our image. A recipe for conflict.
Most women I've known put money far above men's looks. If I had to use a point system, I'd assign it as such:
Earning power/potential: 60 pts.
Protecting and caring: 30 pts.
Looks/muscles: 10 pts.
True, the muscle part could be seen as "protecting and caring". I'm rather large in general, so perhaps that part mostly took care of itself despite me NOT resembling a super-hero. A man small or slight in stature may need muscles or karate skills to make up that portion of the report card.
Women want to be able to walk down the street at night with their guy and feel safe. There are different ways to achieve that. Some men fake it well with pure attitude.
Grace Hopper did not invent COBOL
COBOL was ultimately designed by a committee, but Grace's early compilers had a lot of influence on the language design.
The military and other organizations found it difficult to build financial, logistics, and administrative software for multiple machines that each spoke a different language, and thus formed a joint committee to create a common language. (Science and research institutions had FORTRAN for cross-compatibility.)
Basically the COBOL committee grew sour with disagreement. As the deadline for the first specification approached, the committee fractured into a "git-er-done" tribe and a "do-it-right" tribe. The git-er-done tribe basically cloned Grace's language with some minor changes and additions, because they knew they didn't have time to reinvent the wheel from scratch. Grace's language was road-tested.
As the deadline came up, the git-er-done group was the only tribe with something close to ready, and so that's the work the committee heads ultimately submitted. There were a lot of complaints about it, but the heads wanted to submit something rather than outright fail. (The story varies depending on who tells it.)
Later versions of COBOL borrowed ideas from other languages and report-writing tools, but the root still closely mirrored Grace's language. Therefore, it could be said that Grace Hopper's work had a large influence on COBOL.
(It's somewhat similar to the "worse is better" story between Unix/C and a Lisp-based OS: http://dreamsongs.com/WorseIsB... )
- - - - - - -
As far as what orgs should do about existing COBOL apps, it's not realistic to rewrite them all from scratch, at least not all at once. That would be a long and expensive endeavor. It's probably cheaper to pay the higher wages for COBOL experts, but also gradually convert sub-systems as time goes on (a rough sketch of what that can look like is below).
However, whatever you pick as the replacement language could face the same problem. There's no guarantee that Java, C#, Python, etc. will be common in the future. Rewriting COBOL into Java could simply be trading one dead language for another.
I know shops that replaced COBOL "green screen" apps with Java Applets. Now Java Applets are "legacy" also. (And a pain to keep updating Java for.)
Predicting the future in IT is a dead-man's game. At least the COBOL zombie has shown staying power. If you pick a different zombie, you are gambling even more than staying with the COBOL zombie.
If it ain't broke, don't fix it. If it's half-broke, fix it gradually.
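To make "gradually convert sub-systems" concrete, here is a minimal, purely illustrative Python sketch (every name in it, such as call_legacy_cobol and run_payroll, is made up): a thin routing layer sends the few converted sub-systems to new code, while everything else still goes through whatever bridge reaches the legacy COBOL side.

```python
# Hypothetical sketch of gradual conversion: a thin dispatch layer routes
# already-converted sub-systems to new code and leaves the rest on the
# legacy side. All names here are invented for illustration.

def call_legacy_cobol(subsystem, payload):
    # Stand-in for whatever bridge the shop already has to the COBOL side
    # (message queue, batch files, screen scraping, etc.).
    return {"handled_by": "legacy COBOL", "subsystem": subsystem, "data": payload}

def run_payroll(payload):
    # A sub-system that has already been rewritten in the new language.
    return {"handled_by": "new code", "subsystem": "payroll", "data": payload}

# Only converted sub-systems are listed here; the dict grows over the years.
CONVERTED = {"payroll": run_payroll}

def dispatch(subsystem, payload):
    handler = CONVERTED.get(subsystem)
    return handler(payload) if handler else call_legacy_cobol(subsystem, payload)

print(dispatch("payroll", {"employee": 42}))    # handled by new code
print(dispatch("inventory", {"sku": "A-17"}))   # still handled by legacy
```

The dictionary of converted sub-systems grows one entry at a time, which is about as gradual as a migration gets.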
There is indeed more social pressure on men to be the bread winners, similar to how women are pressured to look attractive. And thus we'd expect young men to work harder and longer to try to get the promotions. If you are pressured by society to do X, you are more likely to do X.
It may not be "fair", but that's society as-is. A quota system doesn't factor this in.
perhaps companies ARE mistreating women and minorities which WOULD make it the company's fault
The company can't force employees to like someone. If there are known incidents, they can perhaps do something, but most "mistreatment" is subtle and/or unrecorded. The organization cannot micromanage social encounters at that level.
In general, many people are tribal jerks. I've had white colleagues who told of being mistreated when they worked in a uniformly non-white group, such as all Asian. The "minority" is often targeted. Sometimes it's driven by resentment of "white culture" discriminating against them in general. They channel that frustration into an individual who happens to be white.
I'm not sure how to fix this because it's probably fundamental to human nature. Mass nagging about "being good" only goes so far. If you over-nag, people often do the opposite as a protest to the nagging. (Is the word "nagging" sexist?)
forcing yourself into a pure functional style means that your code can run anywhere because it doesn't care about the context in which it runs.
Most planners focus on the current needs, not future needs. Whether that's rational (good business planning) is another matter. It's generally cheaper to hire and get future maintenance on procedural code. If and when machine performance issues override that, the planners may THEN care.
It depends on the project. If most of the processing for an app is happening on servers in cloud centers, then it's probably already parallelizing user sessions. Parallelizing at the app level thus won't really give you much more parallelism potential, especially if most of the data chomping is done on the database, which should already be using parallelism. Web servers and databases already take advantage of parallelism, so it would be diminishing returns to parallelize the app level. If the code is looping over 10,000 items in app code itself, the coder is probably doing something wrong.
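As a rough illustration of that last point, using Python's built-in sqlite3 purely as a stand-in for whatever database the app really talks to: the app code hands the grouping and sorting to the database in one declarative statement instead of looping over the 10,000 rows itself, and whether the engine parallelizes that work under the hood is not the app developer's concern.

```python
import sqlite3

# Illustrative only: an in-memory table standing in for 10,000 rows of real data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("c%d" % (i % 100), i * 0.5) for i in range(10_000)],
)

# The anti-pattern warned about above: pull everything back and loop in app code.
rows = conn.execute("SELECT customer, amount FROM orders").fetchall()
totals = {}
for customer, amount in rows:
    totals[customer] = totals.get(customer, 0) + amount
top_in_app = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:5]

# Preferred: let the database engine do the grouping and sorting; whether it
# parallelizes under the hood is its business, not the app developer's.
top_in_db = conn.execute(
    "SELECT customer, SUM(amount) AS total FROM orders "
    "GROUP BY customer ORDER BY total DESC LIMIT 5"
).fetchall()

# Both approaches agree on the top customers; only one ships 10,000 rows around.
assert [c for c, _ in top_in_app] == [c for c, _ in top_in_db]
```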
The industry learned the hard way that OOP works well for some things but not others. For example, OOP has mostly failed in domain modeling (modeling real-world "nouns") but is pretty good at packaging API's for computing and networking services.
The industry may also learn the hard way where FP does well and where it chokes. Some understandably don't want to be the guinea pigs.
If you work on Chevies when everyone else is working on Fords, then you may have difficulty finding mechanics for your customized Chevies. I've yet to see evidence FP is objectively "better" in enough circumstances and niches to justify narrowing staffing options except for certain specialties. (I've complained about lack of practical demonstrations elsewhere on this story.)
A quick search of the journals will show thousands of samples from various time periods tested with the method.
Example near the ranges I mentioned?
Addendum and corrections:
The "actor model" seems pretty close to event driven programming. I don't know the official or exact difference. But my key point is that the event handling programming and interface is procedural. The only non-procedural aspect may be that requests for further actions may need a priority value (rank) and to be submitted to a request queue. For example, a game character may request a "shoot arrow" event on their part as a follow-up. But the event handler writer doesn't have to concern themselves with the direct management of the event-request-queue.
"Any more than...query writers...don't have to know..." should be "Any more than...query writers...have to know...". Remove "don't"
is more important now because of the trend towards more and more cores
But the bottleneck is not CPU itself for a good many applications. And specialized languages or sub-languages can handle much of the parallelism. If I ask a database to do a sort, it may use parallelism under the hood, but I don't have to micromanage that in most cases: I don't care if the sort algorithm uses FP or gerbils on treadmills. Similar with 3D rendering: the designer submits a 3D model with polygons and texture maps, and a rendering engine micromanages the parallelism under the hood. That "root engine" may indeed use FP, but the model maker doesn't have to know or care.
And event-driven programming can hide the fact that parallelism is going on, or at least provide a non-FP interface. For example, in a game, a character may be programmed by how they respond to various events. There may be 10 events going on at once, but the app dev is only focusing on one at a time per character or character type. Granted, it may take better languages or API's to abstract parallelism well. The "root engines" may make heavy use of FP, such as the database, rendering, and event-handling engines, but the 2nd level, typically called "application developers", probably won't need that any more than SQL query writers (usually) have to know how the database engine was written. But only time will tell...
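As a minimal sketch of the kind of interface described here and in the addendum above (all names, such as request_event, dispatch_all, and "shoot_arrow", are invented for illustration): the handler code is plain procedural logic; its only concession to the machinery is submitting follow-up requests with a priority value, and it never manages the queue that dispatches them.

```python
import heapq
import itertools

# Hypothetical engine plumbing the handler author never has to look at.
_queue = []
_counter = itertools.count()  # tie-breaker so equal priorities stay first-in, first-out

def request_event(name, priority, **data):
    # What a handler calls to ask for a follow-up action.
    heapq.heappush(_queue, (priority, next(_counter), name, data))

def dispatch_all(handlers):
    # Engine loop: pops requests in priority order and calls the matching handler.
    while _queue:
        priority, _, name, data = heapq.heappop(_queue)
        handler = handlers.get(name)
        if handler:
            handler(**data)

# The application-level code: plain procedural event handlers.
def on_enemy_spotted(character, enemy):
    print(f"{character} sees {enemy}")
    # Follow-up request; the handler doesn't manage the queue itself.
    request_event("shoot_arrow", priority=1, character=character, target=enemy)

def on_shoot_arrow(character, target):
    print(f"{character} shoots an arrow at {target}")

handlers = {"enemy_spotted": on_enemy_spotted, "shoot_arrow": on_shoot_arrow}
request_event("enemy_spotted", priority=2, character="Robin", enemy="guard")
dispatch_all(handlers)
```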
Frankly, Scarlett, I don't have a fix. -- Rhett Buggler