Comment Re:yes they should (Score 1) 1081

Every state has a minimum of 3 electors regardless of population. So, using 2010 populations:


California has 37,000K people and 55 electors.
3 electors each...
Alaska 721K, Delaware 900K, Montana 989K, North Dakota 675K, South Dakota 819K, Wyoming 568K, Vermont 630K = 5,302K people, 21 electors
4 electors each...
Hawaii 1,300K, Idaho 1,500K, Maine 1,300K, New Hampshire 1,300K, Rhode Island 1,000K = 6,400K people, 20 electors
5 electors each...
New Mexico 2,000K, Nebraska 1,800K, West Virginia 1,850K = 5,650K people, 15 electors

Total: ~17,350K people control 56 electors.

States with roughly half the people of California have as much influence over presidential elections as California does. I am not necessarily saying the Electoral College or 2 Senators per state are bad. But let's recognize that the Founding Fathers constructed a system that gives less populous states a non-trivially greater influence over presidential elections and the Senate.
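To make the arithmetic concrete, here is a quick people-per-elector comparison using the 2010 figures above (the class and method names are just for illustration):

```java
// Rough people-per-elector calculation using the (rounded) 2010 figures.
public class ElectorMath {
    // population in thousands, electors as counted above
    public static double peoplePerElector(int populationK, int electors) {
        return (double) populationK / electors;
    }

    public static void main(String[] args) {
        // California: ~37,000K people, 55 electors -> ~673K people per elector
        System.out.println(peoplePerElector(37000, 55));
        // Wyoming: ~568K people, 3 electors -> ~189K people per elector
        System.out.println(peoplePerElector(568, 3));
    }
}
```

A Wyoming vote counts for more than three times as much, per elector, as a California vote.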

Comment Re:long methods and coupling (Score 1) 497

When I have run into long methods, the problem usually is not a deep call stack; instead, the method has chained together twenty 10-line logical units. It would be more readable to have 20 sequential (not nested) calls to well-named functions that accurately describe what they do. Often those logical units are separate if/else blocks as well, so extracting them makes it easier to unit test each branch rather than trying to set up 200 different branch combinations on one method.
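A minimal sketch of that refactoring, with hypothetical method names standing in for the well-named functions (the order-processing domain is made up for illustration):

```java
// Instead of one long method chaining many 10-line logical units,
// extract each unit into a well-named method called sequentially.
public class OrderProcessor {
    public static int process(int quantity, int unitPrice) {
        int subtotal = computeSubtotal(quantity, unitPrice);
        int discounted = applyBulkDiscount(subtotal, quantity);
        return addShipping(discounted);
    }

    // Each extracted unit is now testable on its own.
    static int computeSubtotal(int quantity, int unitPrice) {
        return quantity * unitPrice;
    }

    static int applyBulkDiscount(int subtotal, int quantity) {
        // An if/else branch extracted this way is easy to unit test directly.
        return quantity >= 10 ? subtotal * 90 / 100 : subtotal;
    }

    static int addShipping(int total) {
        return total + 5;
    }

    public static void main(String[] args) {
        System.out.println(process(10, 20)); // 10*20=200, -10% = 180, +5 = 185
    }
}
```

Each branch of `applyBulkDiscount` can be tested in isolation instead of setting up the whole `process` call.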

Comment Re:Not a hard and fast rule... (Score 1) 281

I think that DevOps, or more accurately deployment automation and Continuous Delivery, makes microservices possible. You need the end-to-end automated tests, the deployment automation to deploy the microservices, and Electric Cloud's software or its equivalent to track which configuration of microservices is actually being tested and follow that into production. Trying to manually run a microservices integration test across a dozen teams working on tens of microservices would likely halt development for a significant amount of time while all the integration issues are worked out, and then making sure the result reaches production is another manual headache. Automation that runs that multi-microservice integration multiple times a day means less change to blame for problems, which makes it easier to track a problem to a culprit and fix it quickly.

Once you are doing continuous delivery into test, it should stretch to production, so you are not only testing the application but the production deployment process as well. If you want that to happen, the silos between operations and development need to be broken down, because it is a little silly for development to build all that reliable automation only to have operations say "Nope, we have our own automation tools," or to wait until the very end to hand the tools off to operations instead of involving them from the beginning.

I will say I am not attempting fine grained microservices. I like to say microservices with "micro" very loosely defined.

Comment Re:When you do microservices, it isn't one project (Score 1) 281

All large projects should be modularized in general, if for no other reason than to maintain the sanity of the people working on them. A microservice is essentially a remote interface slapped onto a module. In Java, with a little clever use of interfaces and dependency injection, a "microservice" can be remote or local with almost no change to client code (think proxy pattern). The client configuration does change, since a local module needs data store connection information while a remote one needs remote discovery information.
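A minimal sketch of that interface/injection idea (`PriceService` and friends are made-up names, not from any real framework, and the "remote" proxy is a stub standing in for an actual network call):

```java
// Client code depends only on the interface; whether the implementation
// is local or a remote proxy is decided by wiring/configuration.
interface PriceService {
    int priceOf(String sku);
}

class LocalPriceService implements PriceService {
    public int priceOf(String sku) {
        return sku.length() * 10; // stand-in for a local data-store lookup
    }
}

class RemotePriceServiceProxy implements PriceService {
    public int priceOf(String sku) {
        return sku.length() * 10; // stand-in for a remote microservice call
    }
}

class Checkout {
    private final PriceService prices;
    Checkout(PriceService prices) { this.prices = prices; } // dependency injection
    int total(String sku, int qty) { return prices.priceOf(sku) * qty; }
}

public class ProxyDemo {
    public static void main(String[] args) {
        // Swapping implementations requires no change to Checkout itself.
        Checkout local = new Checkout(new LocalPriceService());
        Checkout remote = new Checkout(new RemotePriceServiceProxy());
        System.out.println(local.total("abc", 2));
        System.out.println(remote.total("abc", 2));
    }
}
```

`Checkout` never learns which implementation it received; only the wiring (and its configuration) changes between the local and remote cases.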

You could divide teams along these modules, but I find that causes a lot of coordination overhead. I like dividing teams along lines of delivering end-user-visible features, and they work on whatever modules are necessary. It helps reduce the temptation to write the same thing 3 or 4 times in slightly different ways because no one wants to go through the effort of convincing someone else to put it in their module. I generally try to have teams work in the same area for a long period of time, which means they don't have to be experts in every module; instead they know a few that are core to their area, know a few that are peripheral, and have a passing understanding of the rest. It does require good test-driven development, continuous integration, teams that will fix whatever regressions they cause, and teams that respect other teams enough to "do unto others..." and write comprehensible, testable code.

As for deciding what your modules are... That depends. It can evolve, but it requires someone to pay attention to what is being developed and realize when new responsibilities are showing up that belong in a new module. Hopefully, many team members learn to recognize and raise modularization opportunities. I like the Domain Driven Design approach as a starting point, but it has quite a bit of overhead that would not be appropriate for some software.

Comment Re: Rule #1 (Score 1) 281

Some other suggestions...

Another place to look is whether the Sprint Planning meeting is really just sprint planning.
    Has backlog prioritization been rolled into Sprint Planning? Could the product owner do that during the Sprint?
    Is defect triage being done during "Sprint Planning"? Then you are doing two meetings in one. That might be just fine, but maybe not everyone has to be present.
    Are you doing design and coordination during the meeting? Then you are combining sprint planning with design and coordination efforts. This may be a good thing: it can be better to block off the time each Sprint to get everyone together, rather than doing it ad hoc and having to deal with a key person having a conflict. Plus, design often elicits questions about requirements, so it can be good to have the product owner available.

Comment Use the right tool for the job... (Score 2) 175

And if either tool works, use the one you know best. And try to write code well, so you don't get stuck when the right decision now becomes the wrong decision later because things change.

From a data store side, there are reasons to use an RDBMS and one or more NoSQL solutions together. They handle different situations better, and if you have any decent level of complexity you will find the boxes defined by LAMP or MEAN too confining. I generally stay away from MySQL due to the licensing issues, but MariaDB and Amazon Aurora fit the same space; I used PostgreSQL for one solution. Of course, in the AWS case you might forgo MongoDB altogether for DynamoDB, because why deal with operating Mongo when Amazon will take care of that for you?

Some of us have to deal with corporate standards where neither LAMP nor MEAN is actually allowed, which makes this whole thing a pointless flame war.

Comment All your eggs in one basket (Score 1) 173

"Senate appropriators suggested that NASA’s plans announced earlier this year to procure Soyuz seats for missions in 2018 indicated that the agency was not confident at even this early stage that the two companies with commercial crew contracts, Boeing and SpaceX, could remain on schedule to begin flights in 2017."

Clearly the correct approach is to put all your eggs in one basket at any given time.

If you delay American crew launches until 2019, then NASA is going to procure Soyuz seats for 2019 and maybe 2020.

Comment Re:Conduit (Score 1) 557

Running wire to every room might also be reasonable, but terminate at a blank wall plate. 2 Cat5e + 2 RG6 coax can run around $0.30/ft depending on where you get it. (There are even bundled cables, which oddly seem to cost more than the separate cables.) At that price, depending on the length of the run, the wall plate and connectors can be 30-50% of the cost of the run. So run the cable from a closet to a blank plate, and only put connectors on the plates that are actually used.
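With those rough prices, the split works out something like this (the per-foot price is the estimate above; the plate-and-connector cost is an assumed illustrative figure):

```java
// Rough cost split for a structured-wiring run at ~$0.30/ft.
public class WiringCost {
    public static double cableCost(int feet, double perFoot) {
        return feet * perFoot;
    }

    public static void main(String[] args) {
        double cable = cableCost(50, 0.30); // 50 ft run -> ~$15 of cable
        double hardware = 7.50;             // plate + connectors (assumed figure)
        // Termination hardware is roughly a third of this run's total cost.
        System.out.println(hardware / (cable + hardware));
    }
}
```

For shorter runs the fixed termination cost is an even bigger share, which is what makes the blank-plate approach attractive.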

Comment Re:Stupid (Score 1) 387

Global collaboration is a huge challenge that has not really been solved. Or, more accurately, the solutions are still not as good as being in person. But presumably someone made the cost-benefit decision that the advantages of being global make up for the disadvantages. (Skeptical undertones intended.)

From a classroom perspective, and in any situation where you can get a bunch of people together to solve hard problems, vast amounts of whiteboard space are a highly effective tool. The problem is that after spending a bunch of time going down bunny trails, backtracking, etc., which the whiteboard is really good for, the final result needs to be put into a form that can be made persistent in some way. For something short-term that will be used immediately, a photo might be good enough to refresh memories when necessary. For something to be kept longer, someone gets the thankless task of transcribing the results. (Although I thank that person profusely for doing something incredibly important but tedious.)

Comment Re:No. (Score 1) 507

Of course you would say that. You would still be wrong. What you call "organizational dysfunctions" -- but everyone else would call "a normal mix of people" -- can be handled more effectively under a waterfall-like process than under an Agile one.

Well, that is the Taylor versus Deming argument. Very loosely summarized as: People are the problem versus people are the solution. That is more fundamental than Waterfall versus Agile. It seems like it is what drives people to one process or the other.

Agile is likely to be less efficient because you start lots of developers writing code before anyone has a good grip on what the project should look like

It has not been my experience that you start with developers writing lots of code before understanding the project. You have to fill the backlog before doing anything. That means prioritizing backlog items, which means knowing enough about those items to prioritize them. (One trap is de-prioritizing really valuable things because they are risky, rather than doing the opposite: prioritizing efforts to resolve that risk in some way.) The shorter the sprints, the smaller a backlog item needs to be to fit in a sprint, which requires a fairly high degree of understanding of requirements and design in order to split stories small enough. The only significant difference from waterfall, then, is that the sharpest splitting and understanding of requirements and design is on near-term items, and later items are more vague.

I could write a whole bunch of other stuff, because there are consequences to every choice and those consequences are handled using different techniques. And on top of the general consequences, every organization is a complex system with many hidden and visible feedback loops, and both waterfall and agile interact with that system in ways good, bad, and sideways. Change from one to the other and all of those feedback loops kick in, new ones are created, and the whole system can react in unexpected ways. This is nothing new: Six Sigma, TQM, Lean, and others I have never heard of get brought into organizations and perturb the system in unexpected ways.

Comment Re:No. (Score 1) 507

Well, the systems engineer creates great requirements that exactly match the customer's needs at that time, and then just as development starts the business environment changes and those requirements are no longer valid. If you are following the do-ALL-the-requirements, then ALL the design, then ALL the development approach, what has happened is that you have detailed requirements and design and a ton of work that created zero value for anyone, presuming you make the rational decision to abandon the no-longer-relevant requirements. Should you push through and develop software to those requirements, it probably creates even more waste.

The thing is, Waterfall versus Agile is not really the argument. We are just rehashing the Taylor (Scientific Management) versus Deming (Lean) argument. The thought experiment I do is to imagine a Lean organization that is currently doing waterfall: applying Lean to the waterfall process over the course of much continuous improvement starts to look like agile plus continuous delivery.

Consider the wastes of Work in Progress (WIP) and handoffs. The first thing one would notice is that the requirements phase creates a big queue of WIP in the form of unimplemented requirements for the design phase, which creates a queue of unimplemented designs for development, and so on. Queues kill cycle time and WIP is waste. So you put in WIP limits, which means less work in flight, which creates shorter cycles that start to look a lot like sprints. That tends to expose the waste of handoffs, because the overhead of each handoff becomes visible at the faster cycle rate. So you pull your analysts, product owners/customers, developers, and testers closer together to reduce the handoffs between them, which at its ultimate conclusion starts to look like an Agile team. I could keep going, but I think that describes the idea.
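The queue effect can be made concrete with Little's Law (average cycle time = WIP / throughput); a tiny sketch with assumed numbers:

```java
// Little's Law: average cycle time = WIP / throughput.
// Cutting WIP directly cuts how long each item sits in the pipeline.
public class LittlesLaw {
    public static double cycleTimeDays(int wipItems, double itemsPerDay) {
        return wipItems / itemsPerDay;
    }

    public static void main(String[] args) {
        // 100 unimplemented requirements queued, 2 finished per day -> 50 days
        System.out.println(cycleTimeDays(100, 2.0));
        // A WIP limit of 10 with the same throughput -> 5 days
        System.out.println(cycleTimeDays(10, 2.0));
    }
}
```

Same throughput, one tenth the WIP, one tenth the wait: that is why WIP limits alone push cycles toward sprint-sized lengths.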

There are a variety of other ways to look at the differences. There is "people are the problem, and more process and control is the solution" (Taylorist) versus "people are the solution, and making the people better is the answer" (Deming).

If you are in a Taylorist organization with no desire to change I would not attempt Agile. But, there are lots of good software development practices that have the Agile sheen on them that work in any framework.

Comment Re:No. (Score 1) 507

Well, I would not consider hiding dysfunction a good thing. Successful waterfall tends to meet the schedule and budget and fail at providing what the customer actually wanted, because customers generally don't know or can't describe what they really want until they see it.

But setting that aside: waterfall is fragile to change; Agile is resilient to change but will expose a lack of discipline in other areas. Two fundamentals turn that fragility into resilience. First, short iterations expose problems quickly. Second, continuous improvement is embraced to correct the top problems exposed each iteration. So, "fail fast" and correct the problem. Pretty much every other practice involves ways to fail faster or to correct typical failures. Some may not work within your organization; if one doesn't, you find out quickly and try something else.

This is a huge weakness of the waterfall process. It typically takes a long time, six months to years, to get through the whole process. So you can't tell whether you got the requirements phase right until long after that phase ends, and you don't get to correct it until the next project; assuming there is a correction, it is often more process for the sake of process.

In an Agile iteration you cycle through a significant chunk of the process, and you try to do a production deployment of something in as few iterations as possible to expose all the problems. Especially when switching an existing application's development to an Agile process, you are not going to get everything into the iteration all at once. Be honest and accept that you don't have fully automated end-to-end integration tests yet, and set aside an iteration to build them and correct any problems before deploying.

I will tell you a little secret: every time my group has made a significant change towards being more "Agile", we have had some fairly significant problems in the first sprint, what some might even call a failed sprint. But we take our hits and figure out what needs to be fixed; the second sprint is generally successful, and sprints 3 and 4 are better and better.

One could argue that waterfall is good at the extremes: perfect organizations where everything is locked down, which makes the process work, and horrible organizations where the process rigidity provides discipline that would not otherwise exist. Agile works better in the space in the middle, where people are generally good and trying to do the right thing, and you just need a framework for discovery, whether that discovery is of the customer's real needs, internal development problems, or whatever.
