Weather is also a factor. Our power has gone out sporadically (usually preceded by the loud pop of something getting fried), and it usually comes back within an hour or two of my calling. Except during the Great Dallas Ice Storm, which happens *every year* and which *nobody* seems to plan for. The last two years the power has been out for days, which is especially fun with an elderly invalid. The fact that the city/county response to ice is to toss some sand and salt on it, compounded with Dallasites' inability to drive even during good weather, prolongs response times even further. But I still blame Oncor.
To my mind, the best managers set up conditions for their staff to succeed and defend them against higher-level managers. That's it. Managers should take care of head-count and hiring and meetings and all the other boring but apparently necessary stuff; they shouldn't be responsible for *any* analysis and design, let alone coding. Because all the "boring" stuff will eat up someone's time, and better the designated victim, I mean manager, than someone responsible for making progress.
The *worst* managers, in my not-too-atypical experience, are the rock stars, micromanagers, timekeepers, and corporate climbers. True rock-star programmers should be writing code, not going to meetings.
It depends which parts are "agile" and which are "waterfall". From my not-exactly-vast experience, whatever mix you choose has to address four concerns:
1. WHAT ARE YOU BUILDING? Seriously. This is where the extra planning of "waterfall" -- itself a misunderstanding of someone else's comments -- comes in. One of my first jobs was a derivatives trading app in a perpetual tug of war between an outspoken exotic derivatives trader, back office and compliance folks wanting to automate trade reconciliation, risk managers wanting to manage risk, and the rest of the trading desk who just wanted to price whatever deal they were trying to do (vanilla or not). The end result was a kludgey mess that everyone hated.
2. HOW DO YOU ALLOW FOR GROWTH? There are right ways and wrong ways. Agile has a good tip: build only what you need, and as cleanly as you can. Easier said than done, of course, and sometimes (as the Pragmatic Programmers have said) you *know* there will be a database in there sooner or later, so plan your architecture accordingly even if it's a little awkward short-term. Then there's the wrong way, which I saw in one project: build a "flexible" architecture with "configurable" components so that you can do "anything"! Lessons learned: 1) KNOW WHAT YOU'RE BUILDING, even in broad strokes (see above). 2) For software to be reusable, it must first be usable for *something*. 3) If your "flexible" and "configurable" architecture takes more time to modify than writing straightforward code would take, it has failed; start over. 4) It's better to use an open-source framework than build your own. (Buying can also work, but beware of vendor lock-in and ever-increasing fees, especially if you only use a small part of an elaborate framework.)
3. HOW DO STAKEHOLDERS REQUEST CHANGES? No specification will be adequate out of the gate, and unless you're working for NASA or the military, people CAN and WILL request changes, sometimes while you're building the product and certainly once people begin using it. Do you jump when they say how high? Do you have a long and slow review process? Do you work individual tickets? Do you have potentially disruptive "projects", and if so how do you integrate them back into the main project? Different applications have different rates of change and different tolerances for defects, but it's best to work out a change process before the complaints come in.
4. WHEN AND HOW DO YOU CUT YOUR LOSSES? Despite your best efforts the whole thing will become obsolete or unmaintainable, sooner or later (hopefully later). As in point #1, have some idea what you're building, and at what point you'll deprecate it to build something better. (This is the hardest part for many organizations; why can't the floor wax also be a salad dressing?) At some point make a plan, refactor your code base -- over time! -- to separate the parts you'll keep from the parts you don't need, and toss out the latter. If a radical rewrite is the only way to go, bite the bullet and do it, with appropriate safeguards like regression tests. If you can't evolve or replace your product, a competitor -- maybe within your own organization -- will do it for you.
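To make point 2's warning concrete, here's a hypothetical contrast (all names and numbers are mine, not from any real project): a generic "rule engine" that pushes behavior into string-keyed configuration, next to the plain function it replaces. Both compute the same discount; only one is easy to change.

```python
# The "flexible" way: a rule engine where behavior lives in config.
# Any genuinely new behavior means extending both the config schema
# and the engine -- twice the work of just editing a function.
RULES = [
    {"field": "quantity", "op": "ge", "value": 10, "action": "discount", "amount": 0.05},
]

def apply_rules(order, rules):
    price = order["price"]
    for rule in rules:
        matched = {"ge": order[rule["field"]] >= rule["value"],
                   "lt": order[rule["field"]] < rule["value"]}[rule["op"]]
        if matched and rule["action"] == "discount":
            price *= 1 - rule["amount"]
    return price

# The straightforward way: say what you mean, edit it when requirements change.
def price_order(order):
    price = order["price"]
    if order["quantity"] >= 10:   # bulk discount
        price *= 0.95
    return price

order = {"price": 100.0, "quantity": 12}
print(apply_rules(order, RULES), price_order(order))
```

Both paths produce the same answer today; the difference shows up the first time someone asks for a discount the config schema didn't anticipate.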
That's my long-winded advice, from someone who's watched many "successful" and "unsuccessful" projects die. Nearly all in-house software is a Potemkin village designed to keep the tzars happy. If the tzars aren't happy, the fake village is set ablaze and you, the fake peasants, go to Siberia. The only real question is how to keep your frantic peasant dance going as long as possible without dying of exhaustion or being sent to Siberia.
TL;DR: Plan the limits, goals, requirements, and large-scale architecture up front (erroneously called "waterfall"). But planning never really ends; stay agile to correct your course or take advantage of opportunities.
After rewarding Boeing and SpaceX with the contracts to build the spacecraft, NASA is now asking the companies to stop their work on the project.
The move comes after aerospace company Sierra Nevada filed a protest of the decision after losing out on the bid.
Sierra Nevada was competing against Boeing and SpaceX for a share of the $6.8 billion CCP contracts. The contracts will cover all phases of development as well as testing and operational flights. Each contract will cover a minimum of two flights and a maximum of four, with each company required to fly one test flight with a NASA representative on board.
On Sept. 16, NASA announced the winners of the Commercial Crew Transportation Capability (CCtCap) contracts. Sierra Nevada then filed a protest with the GAO on Sept. 26, issuing a statement saying the protest asked for “a further detailed review and evaluation of the submitted proposals and capabilities.”
According to NASA’s Public Affairs Office, this legal protest stops all work currently being done under these contracts. However, officials have not commented on whether the companies can continue working if they are using private funds.
Sierra Nevada's orbiter resembles a mini space shuttle. That alone (remember the problems with the tiles) should have been enough to disqualify them.
"On average, girls are - for whatever reason - less interested in math, physics, chemistry."
The Code.org article said none of this. In fact, it freely acknowledged social pressures that discourage women from entering or staying in tech. It's not unreasonable to suppose stories from women in tech discourage the next generation from even attempting to enter computer-related fields. It helps to read the freaking article.
As others have said, people -- mostly male upper-class Europeans -- have used biology to justify slavery, denying women/minorities the vote, giving harsher sentences to black or Eastern European defendants, and so on. (And I'm not even Godwinning.) Read Stephen Jay Gould's _The Mismeasure of Man_, then read Carol Tavris's _The Mismeasure of Woman_.
Power grids, the internet and other networks often mitigate the effects of damage using redundancy: they build in multiple routes between nodes so that if one path is knocked out by falling trees, flooding or some other disaster, another route can take over. But that approach can make them expensive to set up and maintain. The alternative is to repair networks with new links as needed, which brings the price down – although it can also mean the network is down while it happens.
As a result, engineers tend to favour redundancy for critical infrastructure like power networks, says Robert Farr of the London Institute for Mathematical Sciences.
So Farr and colleagues decided to investigate which network structures are the easiest to repair. Some repairs just restore broken links in their original position, but that may not always be possible. So the team looked at networks that require links in new locations to get up and running again. They simulated a variety of networks, linking nodes in a regular square or triangular pattern, and looked at the average cost of repairing different breaks, assuming that expense increases with the length of a rebuilt link.
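The repair-as-needed scheme the excerpt describes is easy to sketch in miniature. The toy below is my own reconstruction, not the Farr group's actual model: it builds a zero-redundancy network (a minimum spanning tree) over a small grid of nodes, cuts each link in turn, and "repairs" with the cheapest new link, in any position, that reconnects the two halves, taking link length as a proxy for cost.

```python
import math

def dist(a, b):
    # Euclidean length of a link between two (x, y) nodes
    return math.hypot(a[0] - b[0], a[1] - b[1])

def min_spanning_tree(nodes):
    """Prim's algorithm over all Euclidean node pairs: a minimal,
    zero-redundancy network, where cutting any link disconnects it."""
    in_tree = {nodes[0]}
    edges = []
    while len(in_tree) < len(nodes):
        u, v = min(((u, v) for u in in_tree for v in nodes if v not in in_tree),
                   key=lambda e: dist(*e))
        edges.append((u, v))
        in_tree.add(v)
    return edges

def split_components(nodes, edges, removed):
    """The two halves the network falls into when `removed` is cut."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        if (u, v) != removed:
            adj[u].append(v)
            adj[v].append(u)
    seen, stack = {removed[0]}, [removed[0]]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return seen, [n for n in nodes if n not in seen]

def average_repair_cost(nodes, edges):
    """Cut each link in turn and repair with the cheapest new link
    (anywhere) that reconnects the halves; average the repair costs."""
    total = 0.0
    for e in edges:
        side_a, side_b = split_components(nodes, edges, e)
        total += min(dist(a, b) for a in side_a for b in side_b)
    return total / len(edges)

nodes = [(x, y) for x in range(4) for y in range(4)]  # 4x4 square grid
tree = min_spanning_tree(nodes)
print(average_repair_cost(nodes, tree))
```

On a unit-spaced grid every optimal repair happens to be a unit-length link, so the average comes out to exactly 1.0; irregular node layouts, or lattices with redundant links, give the less trivial comparisons the researchers were after.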
*People* are different, and like different things. Men and women, however, aren't that different (roles in reproduction excepted), so a statistically significant difference points to a social or psychological cause, not biology.
That said, the PC isn't itself the problem, as TFA -- or maybe just the summary -- seems to imply. Looking at other professions with gender imbalances, though, one can posit a few underlying causes. a) Secretaries were once men who helped important people with important matters; once the typewriter came in, women seized on typing as a "respectable" way to support themselves, and the modern secretarial pool was born. (See http://www.stuffmomnevertoldyou.com/podcasts/why-is-secretary-the-most-common-job-for-women-in-the-u-s/) b) Bletchley Park and earlier research projects employed female "computers" before they developed electronic ones, because women worked hard and worked cheap. All the mathematical whizzes, however, were upper-class men; who would pay for a woman's education, when she would just get married and pop out kids? (See also Disney animators.)
Obviously somebody needs to do solid research, but one could hypothesize that the PC coincided with three trends: the growth of male-dominated "hacker" culture, the use of PCs by Serious Men for Serious Business, and the decline of mainframes (i.e. server rooms in which nobody knew or cared women worked). Without hard data, though, this is mere conjecture. Loads better than "women don't like computers", though.
Legacy properly describes a software system, not a language. Languages rise and fall in popularity. Sometimes a language has inherent limits, sometimes the implementation stinks, sometimes the syntax or paradigm is no longer fashionable. Sometimes languages and platforms disappear only to re-emerge years later. Back in the late 1990s NeXTSTEP/OPENSTEP was turning into a "legacy platform"; then Apple bought NeXT, and it re-emerged as the foundation of Mac OS X.
Stay in the industry long enough, you'll see everything come back.
Is this a valid analogy? In short, no. A bit longer answer: NOOOOOOO. For a full explanation, read on.
I can't speak to how construction works, but I know how software development and developers work. Usually software breaks not because of a bad developer, but because of integration issues and subtle interactions which are hard to detect, and even harder to assign "blame" for without a lot of investigation. The investigation is generally the hardest part, so you'd end up charging developers for time already spent just finding out whose "fault" it was.
Worse, your boss is proposing a "blame game" where every defect is somebody's fault, almost always somebody's on the current development team. Far from encouraging better software, this will keep developers from entering their own bugs (or any bugs) into the bug tracking system, and encourage finger-pointing rather than collaboration. Meanwhile, your boss thinks he'll save money by making developers work for free "on their own time". In the worst case, the last person who touched a piece of code is IT, whether the defect stems from a legitimate mistake or a weird edge case. What you'll get is a workplace full of egos, fiefdoms ("don't mess up MY code"), and destructive competition.