This does happen, although I am not sure if it is seen in red shifts. It is definitely seen in the Cosmic Microwave Background where large cold spots are thought to be due to voids along the line of sight that the CMB photon traveled. I presume a similar effect would apply to any photon crossing that void.
Paradox - "a statement or proposition that, despite sound (or apparently sound) reasoning from acceptable premises, leads to a conclusion that seems senseless, logically unacceptable, or self-contradictory."
The paradox is that energy is supposed to be conserved, but space has energy and space is increasing. So, we have a logically unacceptable conclusion.
Just because it is a current paradox doesn't mean it can never be resolved. We find an energy source, or figure out the laws of physics which in this case allow for the creation of energy, and it stops being a paradox.
Quantum physics calculations say the vacuum energy is one value, while measurements of the curvature of the universe say it is a different value. That is a paradox, especially since both quantum physics and the physics involved in measuring the curvature of the universe seem to be right in other respects, such that making changes to resolve this paradox causes them to stop describing other things accurately. So, we have a logically unacceptable conclusion.
The red shift thing doesn't look like a paradox, but a really cool test of our understanding of cosmological red shift.
And, the homogeneity problem could be a paradox: linearity of expansion says the universe is homogeneous, observations say it is not. But, they don't mention whether observations have done a reasonable job of determining the dark matter distribution of the universe.
There are paradoxes in the article, but it does drift into one topic that is not a paradox and another that is borderline.
One point that is missed. The dealers are not actually worried about Tesla. They are worried every other manufacturer will switch models and put them out of business.
My Prius is 9 years old with 130,000 miles, and the brakes have only just recently shown measurable wear, since most braking is handled by the electric motor except for hard braking and braking under 6 mph.
All of you get the hell off my lawn!!!!
But some flavors say that developers are supposed to work on all parts of the project.
I think that is a misreading. Development teams should be assigned to end-to-end features. Development teams can be specialized in particular features and the components associated with those features. Development teams should be allowed to work on whatever component is required to implement their features, including some that may only be peripheral to their core components, in order to complete a feature. Experts should be available to assist with modifying those peripheral components. That means teams have senior members who have some responsibility to teach other teams. If, after some time, the most valuable work is no longer in the area of the application that the team is specialized in, the team should start getting work in the more valuable area of the application and start ramping up their capabilities in that area.
The calculation is whether lower effectiveness at higher value work is worth more than higher effectiveness at low value work. Clearly switching teams around constantly, so they are always on the steep part of the learning curve is stupid, and should not be done. But, seeing a lot of work in a high value area of the system or organization would make it worthwhile to move that team and have them start learning something new.
Hourly billing can work, but it does require a level of trust and openness between customer and contractor that may not be possible in many cases. The customer has to trust that the contractor is billing for productive hours worked at a reasonable profit. The contractor has to trust the customer enough to open up their own operations sufficiently to show the customer that they are getting value for their money.
It kind of depends on which kind of manager, and the organization. My experience is that managers love their illusions of control, but hate actual control because then they have to be responsible for the results. Also, actual control means actual effort. They want burn down charts that they can look at once a week and pretend they are contributing something useful by beating on people when they are not perfectly on the ideal line. Of course, in two week sprints at that interval burn down charts are virtually worthless.
Backlog grooming... Hah. Nope. Instead of prioritizing the work, they want to spend several weeks every 6-12 months where development, PMs, and business analysts come before them on bent knee, begging for them to approve the work statement they believe should be done. Of course they don't actually understand any of what is being asked for, but want to go through the ceremony of having people give estimates, and act like they are somehow contributing to the ability of the people doing actual work and talking to actual users to determine what should be done and what will fit in the time allotted. Then, they can go off and likely do the same thing with executives, who also think they are justifying their salaries by looking at one-page PowerPoint slides once a month or so. The needs will change during those 12 months, but we like to pretend we can predict that far ahead.
And, many military software contracts are moving to require Agile practices because they are tired of spending a lot of money on cancelled projects with nothing to show for it except a bunch of documents.
That is weird. I find it much harder to get access to actual users for externally facing software. At least for internal software the user works for the same company. The fact that the company doesn't actually want the users to contribute their knowledge of how they actually do business to the creation of the software that is supposed to help them do that business seems like a dysfunction that will doom a project regardless of the methodology. Interestingly, the project itself might not be the thing that gets doomed; rather, the users might not actually benefit, perhaps because the company mandates using software that does not actually make them more productive.
Of course new requirements might require re-architecting. That is not a property of Agile.
Waterfall does not provide reliable estimates either, because organizational dysfunction generally results in over-estimating: no one gets punished for coming in early, but you are always punished for being the slightest bit late or over budget. I.e. 20% under budget and early is rewarded while 5% over budget and 5% late is punished, but who had the more accurate estimate? In addition there is the dysfunction of the effort expanding to fill the estimate.
Agile can be reliable for hitting estimates and hitting time and budget if you use the empirical information provided by point estimation and velocity. But, it requires consistency. Consistency of teams, consistency of story (requirement) quality, and consistency of relative sizing. Actually, requirement quality can be compensated for by increasing an estimate to account for a story having higher uncertainty than other stories, which represents the effort prior to the sprint to reduce that uncertainty. But, by achieving consistency you now have velocity, and total points divided by velocity gives the number of sprints needed. Which, given a consistent team and knowing what that team costs, results in knowing the cost as well.
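The arithmetic above can be sketched in a few lines (a minimal illustration only; the point totals, velocity, and per-sprint cost below are made-up numbers, not from any real project):

```python
import math

def sprints_needed(total_points: float, velocity: float) -> int:
    """Sprints required to burn down the backlog, given a stable
    velocity (points completed per sprint). Rounds up, since a
    partial sprint still costs a full sprint."""
    return math.ceil(total_points / velocity)

def projected_cost(total_points: float, velocity: float,
                   cost_per_sprint: float) -> float:
    """Cost forecast: sprint count times the (known, consistent) team cost."""
    return sprints_needed(total_points, velocity) * cost_per_sprint

# Hypothetical example: a 120-point backlog, a team averaging
# 25 points per sprint, costing $40,000 per two-week sprint.
print(sprints_needed(120, 25))          # 5 sprints
print(projected_cost(120, 25, 40_000))  # $200,000
```

Note that the whole forecast is only as good as the consistency assumptions: a reshuffled team or inconsistent relative sizing invalidates the measured velocity.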
One of the big tricks I have found is that you actually have to identify everything that might need to be done to get something to "release" (whatever that means in your org). I don't mean like in waterfall, with every individual thing in a project plan with an estimate. I mean in general, for all stories, i.e. a Definition of Done. Then, you identify the things that can and must be done during the story's sprint, and that is the sprint definition of done. What is left over can be called the release definition of done, and is work that someone has to do before a change can be released. Which requires that time and resources be allocated to accomplish those things after the sprint. And, it also requires acknowledging that every one of those items that was not done during the sprint creates a risk, typically to the release schedule. I would argue that the perfection goal is for most "release" definition of done items to be optional, and assessed as to whether the cost of doing that thing is worth the risk that something it could catch is in the code and gets in front of a customer. For example: does every release require a full at-scale performance and scalability test that is costly and might take a lot of time? Or, is it better to assess the changes and take the calculated, explicitly stated risk that the probability of a significant problem is small, versus the benefit of putting the changes into production and detecting and fixing the less significant problems later? Non-optional items to be done after the sprint are high risk, in that they are probably non-optional because they have a high chance of detecting issues that would prevent the release, e.g. post-sprint functional testing.
It does require being brutally honest about whether a story is complete according to the sprint definition of done: if it is not done, it is not done, and it must be moved to another sprint (preferably the next sprint). Unfortunately, most management does not want honesty or reality. They prefer their illusions of control, because when handed actual reality they then have to do something about it and realize that they don't have any actual control over what goes on in their organization. So, you get blame and punishment, which simply results in everyone pretending something is done when it really is not. Which lets everyone spend quite a while being happy until the whole thing blows up in their faces.
This is the great thing about waterfall: you can produce requirements and design documents on schedule and everyone thinks all is well. Then, you can spend a bunch of time writing code that may or may not actually work. Then, at code complete you pretend you are actually done, but you know there is a bunch of time for defect fixes during test, plus you are going to have to implement new features because a business opportunity came up that is very valuable and it was decided that it could be implemented while everything else was tested. Even during testing, with defects being found, everyone is fairly happy because defects are expected. But, sometime during test, reality finally slaps everyone and it is realized the schedule is impossible, a lot of things are not really done, and there are too many defects. So, we go about punishing everyone. Testers and developers work a bunch of overtime. PMs and managers get yelled at by their superiors, and everyone looks for someone to blame for the bad code. People spend days trying to figure out how much the actual release is going to slip, until finally someone decides the product is good enough and it goes to production with great fanfare. And, we repeat everything all over again.
Returning to how to deal with this in Agile: consider the situation where testing is manual, and significant chunks can slip out of the sprint definition of done. Or, you can end up doing mini-waterfall inside the sprints, where days are set aside at the end for testing. What often gets forgotten is the continuous improvement part of Agile, or Lean, or whatever you want to call it. Perhaps no one is setting aside time to move tests from manual to automated. It can be a big bang, or it can be a matter of automating all new tests and a portion of old tests every sprint, to gradually free up the testers from being human script runners to spending more time figuring out what the tests should be, or doing the things humans do better, like verifying the UI visually, or trying to find the weird edge cases that are hard to think of when the application is not right there in front of you (exploratory testing).
And, on top of that is the simple truth that most of the time no one knows what they actually want until they see it. So, the exercise of writing requirements for any significant piece of software is an exercise in writing requirements that are at least 50% wrong, and even worse, having no idea which 50% is wrong. You then put those into a contract and get the wrongness locked in, since changes cost money and have a pain in the ass process to get approved. Then, add government contracting, which makes change even harder, and it's no wonder that the project fails.
The knee jerk solution is we need more detailed requirements or more analysis or whatever which tends to do little to relieve the problem that 50% of the requirements are still wrong.
I have seen many books on software development that say that a significant part of a senior developer's job is supposed to be teaching, thereby increasing the overall team's productivity. Of course what an MBA would say is that the senior developer is not doing enough programming and direct the senior developer to stop helping others to the detriment of the team.
Same here. Usually the coding mistakes occur in the easiest code, and are usually the easiest to detect and fix. The hard and undetected bugs are the ones that are the result of multiple pieces of code interacting in unexpected ways, easy, medium or hard at the individual code chunk level doesn't really matter.
The other source I have found is leaving unspecified paths open to users. You think that you don't have to prevent a user from doing something because it should work, and it is actually more effort to prevent that use case. Then, you get bit in the ass because the user expects a different behavior, it's not tested very well since it is unspecified, and often no one even really made a conscious choice to "add" the behavior, which has effectively become a feature that now needs to be supported.
I didn't see anything in the article to indicate this currency would be anything like bitcoin, other than the title saying without any backing evidence "bitcoin-like money". It seems like any other currency except Ecuador avoids the expense of printing paper money or minting coins.
It would be extremely interesting if this is a move by Correa to put Modern Monetary Theory into practice. Correa is an economist by training, and clearly not a neo-liberal. If we see the Ecuador government switch to collecting taxes in the new currency and improving tax enforcement, I think it would be a good sign that is the direction. Assuming the neo-liberals and Washington Consensus types don't assassinate Correa before the transition is complete, it could be a fascinating case study in whether the MMT crowd gets it right. The trick will be figuring out how to get the dollar-denominated sovereign debt eliminated, by paying it off, converting it to the new currency, or possibly fully repudiating it. The problem being that the only real way for Ecuador to get dollars is by having a trade surplus; that they have oil is advantageous. Getting people to stop holding dollars for savings, regular transactions, etc... would move those dollars from private hands to the government, where they could be used to pay off dollar-denominated bond holders and get out of the business of issuing debt in some other nation's currency.